Gentoo Forums

[split] In the weeds, ActivityPub and file sharing
erm67
Guru

Joined: 01 Nov 2005
Posts: 329
Location: EU

PostPosted: Tue Dec 25, 2018 10:20 am    Post subject: [split] In the weeds, ActivityPub and file sharing

Split from Gentoo Linux on g+ social network, where will it move? --pjp

NeddySeagoon wrote:
The concept makes my head hurt.

When I was young, I didn't have a wristwatch or a mobile phone.
It was common to be told you could go out, but had to be home when the street lights came on.
I used to actually meet people.

Some people pick up plenty of women IRL using social media :lol: :lol: it was a lot more complicated to keep multiple relationships going at the same time before.

pjp wrote:
erm67 wrote:
Basically I register on my pleroma instance, which runs inside my home, and then I am able to be followed on every platform in the fediverse, and I can follow people from every other platform/server in the fediverse. Of course personal data, videos and pictures marked as public will be copied outside and will no longer be private, since digital copies cannot be revoked
Thanks for the info. email and newsgroups seemed to have solved the problem with fewer shiny complications. I'm looking forward to being completely out of touch with "modern technology solutions."


Well, as an old-time USENET user I can say places like mastodon actually look a lot like the old USENET; it is really an evolution of the old newsgroups. There are places like the old alt.* hierarchy, with drugs and prostitutes, and serious channels like the nextcloud one; posting in a mastodon channel is actually the closest thing to a USENET newsgroup of the old times. But news servers were centralized and controlled; the fediverse is self-hosted, and there is a lot more freedom. Some servers are only available over tor. Also, when google took over USENET most people just left. The old times were cool, but the old tools (the news protocol) now look primitive and insecure. Today you need a letsencrypt certificate to connect a server, and messages are crypto-signed ....
I remember I was posting on alt.drugs using my provider's news server; well, at a certain point the bastards posted my real name and (old) address on alt.drugs, complaining that my info was not complete .....

BTW a raspi or an inexpensive TV box is perfect for running a fediverse node for family and friends, or just to have a digital identity recognized in the whole fediverse.

I wouldn't be surprised if websites like reddit started supporting ActivityPub soon; that would also solve the problem of where the gentoo groups should go, since it would make the gentoo subreddit available to the entire fediverse.
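For reference, a federated identity in ActivityPub boils down to a small JSON document (an "actor") served over HTTPS, which other servers fetch to learn where to deliver posts. A minimal sketch per the W3C spec, with a made-up domain and username:

```python
import json

# Minimal ActivityPub "actor" document. The W3C spec defines these fields;
# the domain and username below are hypothetical placeholders.
actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Person",
    "id": "https://example.social/users/gentoo",
    "preferredUsername": "gentoo",
    "inbox": "https://example.social/users/gentoo/inbox",    # where others deliver activities
    "outbox": "https://example.social/users/gentoo/outbox",  # where this actor's posts live
}

print(json.dumps(actor, indent=2))
```

Any server in the fediverse that can fetch this document and POST to the inbox can federate with it, which is why a raspi behind a domain name is enough.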
_________________
True ignorance is not the absence of knowledge, but the refusal to acquire it.
Ab esse ad posse valet, a posse ad esse non valet consequentia

My fediverse account: @erm67@erm67.dynu.net
pjp
Administrator

Joined: 16 Apr 2002
Posts: 17758

PostPosted: Tue Dec 25, 2018 6:19 pm

erm67 wrote:
old time tools (the news protocol) look now primitive and insecure.
Well, sure, they were abandoned. I'm thinking of stuff like Web 2.0, which IMO adds more hard-to-identify security holes than any improvement it offers. But thanks for the additional information. If it does "take over," maybe I'll look into some future iteration. Or maybe eventually something comes along requiring it that piques my interest.

erm67 wrote:
BTW a raspi or an inexpensive TV box are perfect to run a fediverse node for family and friends or just to have a digital identity recognized in the whole fediverse.
How much care and feeding are required? I've avoided running my own services, VPN or hosted servers because I don't want to have to SA more stuff.

erm67 wrote:
I wouldn't be surprised if websites like reddit will start supporting ActivityPub soon, that would also solve the problem of where the gentoo groups would go, since it would make the gentoo subreddit available to the entire fediverse.
I don't recall reading anything about ActivityPub, but I just checked, and it concerns me that it is developed by the W3C: W3C abandons consensus, standardizes DRM, EFF resigns. And also the appearance of being a rubber stamp for Google: SPDY, QUIC and AMP.
_________________
I honestly think you ought to sit down calmly, take a stress pill, and think things over.
Ant P.
Watchman

Joined: 18 Apr 2009
Posts: 5584

PostPosted: Tue Dec 25, 2018 6:57 pm

Not to mention ActivityPub is so broken and downright dangerous as a spec it's already been forked.

Who designs a communication protocol with the polar opposite of plausible deniability in 2018?
erm67
Guru

Joined: 01 Nov 2005
Posts: 329
Location: EU

PostPosted: Tue Dec 25, 2018 7:58 pm

Ant P. wrote:
Not to mention ActivityPub is so broken and downright dangerous as a spec it's already been forked.

Who designs a communication protocol with the polar opposite of plausible deniability in 2018?


Sure, I see something completely different; funny enough, the fork is just a dialect supported by pleroma, which makes it one of the many protocols in the fediverse. The new name is an improvement, since various implementations of the same OStatus standard could not interoperate :-)
It's probably not even a desirable feature that the whole fediverse speak the same language/protocol anyway; as long as it is possible to establish a route, maybe through a pleroma gateway, the servers can use whatever standard they want.

pjp wrote:
erm67 wrote:
old time tools (the news protocol) look now primitive and insecure.
Well, sure, they were abandoned. I'm thinking of stuff like Web 2.0, which IMO adds more hard to identify security holes than any improvement offered. But thanks for the additional information. If it does "take over," maybe I'll look into some future iteration. Or maybe eventually something comes along requiring it that piques my interest.

Well, "take over" is a bit too much; the fediverse is mostly collecting rejects from other social networks, like the accounts recently banned from Tumblr for pornography.
pjp wrote:

erm67 wrote:
BTW a raspi or an inexpensive TV box are perfect to run a fediverse node for family and friends or just to have a digital identity recognized in the whole fediverse.
How much care and feeding are required? I've avoided running my own services, VPN or hosted servers because I don't want to have to SA more stuff.

You plan to run gentoo on it and you ask about the time spent maintaining it?
Right now it is actually a bit too soon: pleroma is alpha software; mastodon and diaspora require a ruby stack, are a bit hard to install and manage, and sometimes throw mysterious ruby error messages; friendica and hubzilla are more mature but require a LAMP stack, which is maybe not exactly easy; and the social app of nextcloud is an alpha that runs on NC 15, which was released just weeks ago ......
However, it is 2018 and there are docker images available that work more or less out of the box for most of them.

pjp wrote:

erm67 wrote:
I wouldn't be surprised if websites like reddit will start supporting ActivityPub soon, that would also solve the problem of where the gentoo groups would go, since it would make the gentoo subreddit available to the entire fediverse.
I don't recall reading anything about ActivityPub, but I just checked, and it concerns me that it is developed by the W3C: W3C abandons consensus, standardizes DRM, EFF resigns. And also the appearance of being a rubber stamp for Google: SPDY, QUIC and AMP.


I am using SPDY, aka http2, and Brotli compression on all my servers. QUIC is probably necessary at some point: TCP checksumming is obsolete, retransmissions are rare, and a better job can be done at a higher level. What is wrong with AMP anyway?

You can't teach an old dog new tricks, not to all old dogs at least ....
Ant P.
Watchman

Joined: 18 Apr 2009
Posts: 5584

PostPosted: Tue Dec 25, 2018 9:31 pm

erm67 wrote:
It's probably not a wanted feature that all the fediverse speaks the same language/protocol anyway; as long as it is possible to establish a route, maybe through a pleroma gateway, the servers can use whatever standard they want.

It's already the case that nobody speaks the same language because ActivityPub as written is not a complete protocol, it's unimplementable in the same way OOXML is. It's not possible to write a conforming implementation that can communicate with other servers correctly without 4 or 5 other side channels that the spec authors refuse to acknowledge the existence of.

pjp's hunch is on the mark here, this is all pie in the sky.
pjp
Administrator

Joined: 16 Apr 2002
Posts: 17758

PostPosted: Wed Dec 26, 2018 6:42 am

erm67 wrote:
Well, "take over" is a bit too much; the fediverse is mostly collecting rejects from other social networks, like the accounts recently banned from Tumblr for pornography.
If it attracts more people, then it may become a viable contender. Porn likely contributed appreciably to the success of the other platforms. And since the name doesn't have negative connotations (as a dedicated porn site would), there won't be much stigma around using it. The more people who are banned, the more incentive to find an alternative.

erm67 wrote:
You plan run gentoo on it and ask about time spent maintaining it?
I don't plan to, precisely because of the time involved. Compiling upgrades is a notable but minor issue. Every new service adds at least one new log to be monitored, and every new server adds multiple logs. That becomes a job very quickly.

erm67 wrote:
Right now it is actually a bit too soon: pleroma is alpha software; mastodon and diaspora require a ruby stack, are a bit hard to install and manage, and sometimes throw mysterious ruby error messages; friendica and hubzilla are more mature but require a LAMP stack, which is maybe not exactly easy; and the social app of nextcloud is an alpha that runs on NC 15, which was released just weeks ago ......
However, it is 2018 and there are docker images available that work more or less out of the box for most of them.
Yeah, Docker. No thanks.

erm67 wrote:
What is wrong
pjp wrote:
the appearance of being a rubber stamp for Google


erm67 wrote:
You can't teach an old dog new tricks, not to all old dogs at least ....
A pup may learn a trick new to it, but it is rare that the trick is actually new.
erm67
Guru

Joined: 01 Nov 2005
Posts: 329
Location: EU

PostPosted: Wed Dec 26, 2018 9:09 pm

Ant P. wrote:
erm67 wrote:
It's probably not even a desirable feature that the whole fediverse speak the same language/protocol anyway; as long as it is possible to establish a route, maybe through a pleroma gateway, the servers can use whatever standard they want.

It's already the case that nobody speaks the same language because ActivityPub as written is not a complete protocol, it's unimplementable in the same way OOXML is. It's not possible to write a conforming implementation that can communicate with other servers correctly without 4 or 5 other side channels that the spec authors refuse to acknowledge the existence of.

pjp's hunch is on the mark here, this is all pie in the sky.

Yeah, I see; you should really drop them an email explaining what they're doing wrong ...

pjp wrote:
If it attracts more people, then it may become a viable contender. Porn likely contributed appreciably to the success of the other platforms. And since the name doesn't have negative connotations (as a dedicated porn site would), there won't be much stigma around using it. The more people who are banned, the more incentive to find an alternative.

For some people it is already the alternative. At how many registered users does a social network platform become noteworthy for you?
pjp wrote:
Every new service adds at least one new log to be monitored, and ever new server adds multiple logs. That becomes a job very quickly.

pjp wrote:
Yeah, Docker. No thanks.

According to the pleroma developers it requires so few resources that it can be deployed on a $2.50/month Vultr instance.
pjp
Administrator

Joined: 16 Apr 2002
Posts: 17758

PostPosted: Wed Dec 26, 2018 9:42 pm

erm67 wrote:
For some people it is already the alternative, at how many registered users does a social network platform become noteworthy for you?
Depends on the definition of noteworthy. I personally don't use any "social network platform." Unless you consider f.g.o to be such a platform. A number of people use private file sharing, which is very noteworthy to them, but not at all noteworthy to me (although I have considered trying a p2p file system as a type of "NAS").

A more direct answer would be that it is noteworthy to me when it is a solution to a problem I'm trying to solve, and if I think it will remain a solution (it isn't likely to stagnate or be abandoned because it is a project by a single developer, etc.). However, if the platform is trying to be a solution for non-technical users but is too complicated to attract such users, then it is a failure.


erm67 wrote:
pjp wrote:
Every new service adds at least one new log to be monitored, and every new server adds multiple logs.

pjp wrote:
Yeah, Docker. No thanks.

According to the pleroma developers it requires so few resources that it can be deployed on a $2.50/month Vultr instance.
Cost wasn't a concern (although the "problem" isn't one I'd pay $2.50/month to solve). I'm guessing a Vultr instance would be a VPS that I'd have to monitor. I looked into Docker as a possible solution for an environment and decided it offered nothing of value and would significantly increase administration workload. Containers might be the correct solution for some environments, but I haven't worked there.
erm67
Guru

Joined: 01 Nov 2005
Posts: 329
Location: EU

PostPosted: Thu Dec 27, 2018 7:25 am

pjp wrote:

However, if the platform is trying to be a solution for non-technical users but is too complicated to attract such users, then it is a failure.

pjp wrote:
Containers might be the correct solution for some environments, but I haven't worked there.


Well, one of the thrusts behind docker, flatpaks and snaps is exactly to solve the "too complicated for non-technical users to install" problem, or at least that is what they have become lately :-)

p2p private file sharing is nonsense, unless it is illegal of course.

pjp wrote:
It appears to be "official" (the contact is pr) with 8,131 followers.

Do you think the loyal userbase is much larger than that?
Hu
Moderator

Joined: 06 Mar 2007
Posts: 13485

PostPosted: Fri Dec 28, 2018 1:45 am

erm67 wrote:
p2p private file sharing is nonsense, unless it is illegal of course.
I am disappointed to see you make such a blanket claim. I see value in private file sharing. Consider sending family photos or videos to physically distant friends or family, but not making those photos available to the general public. Not everyone wants to receive large video files via e-mail or MMS.

Consider the use of sending a not-yet-published draft of a work to selected peers for review, prior to sending it to a publisher who will expect to charge money for creating and distributing the copies it provides to the public. The publisher would be unhappy if the draft had been available on public filesharing networks, since that would undercut the market for the finalized copies. If the entity placing the unpublished work on the network were the original author (and had not signed away the rights to the work), there is no copyright infringement in this scenario.
1clue
Advocate

Joined: 05 Feb 2006
Posts: 2510

PostPosted: Fri Dec 28, 2018 3:29 am

Hu wrote:
erm67 wrote:
p2p private file sharing is nonsense, unless it is illegal of course.
I am disappointed to see you make such a blanket claim. I see value in private file sharing. Consider sending family photos or videos to physically distant friends or family, but not making those photos available to the general public. Not everyone wants to receive large video files via e-mail or MMS.

Consider the use of sending a not-yet-published draft of a work to selected peers for review, prior to sending it to a publisher who will expect to charge money for creating and distributing the copies it provides to the public. The publisher would be unhappy if the draft had been available on public filesharing networks, since that would undercut the market for the finalized copies. If the entity placing the unpublished work on the network were the original author (and had not signed away the rights to the work), there is no copyright infringement in this scenario.


I almost replied to that one too.

Peer to peer file sharing has existed almost since two computers were hooked together with a cable. I haven't used one of the current p2p apps where people distribute pirated music and movies (among other things) but there are lots of scenarios where peer to peer file exchange in a private setting makes huge sense. One such exchange is with git. The Linux Kernel has no central server. Kernel developers who supply kernel.org don't use that server for their "real" development code. The git app requires no server, and it's all based on trust. There's a YouTube video of Linus talking about that. At any rate, every source file in the kernel is transferred with peer-to-peer file sharing in a private setting. Is that illegal?

I use git in peer-to-peer mode too. And it's private. Back before .com was invented, I used modems with acoustic couplers to transfer files to and from a BBS. Or to and from a friend's computer. Or across a serial cable to another computer in the room. Those are all peer-to-peer.
erm67
Guru

Joined: 01 Nov 2005
Posts: 329
Location: EU

PostPosted: Sat Dec 29, 2018 2:29 pm

Hu wrote:
erm67 wrote:
p2p private file sharing is nonsense, unless it is illegal of course.
I am disappointed to see you make such a blanket claim. I see value in private file sharing. Consider sending family photos or videos to physically distant friends or family, but not making those photos available to the general public. Not everyone wants to receive large video files via e-mail or MMS.

Consider the use of sending a not-yet-published draft of a work to selected peers for review, prior to sending it to a publisher who will expect to charge money for creating and distributing the copies it provides to the public. The publisher would be unhappy if the draft had been available on public filesharing networks, since that would undercut the market for the finalized copies. If the entity placing the unpublished work on the network were the original author (and had not signed away the rights to the work), there is no copyright infringement in this scenario.


Well we should define p2p private file sharing first, otherwise we might be talking about different things.

If I share my files privately with my family, that is private file sharing; the problem is that, in my understanding, p2p means a p2p protocol. I am not questioning the private file sharing part, which I actually advocate; I question the use of p2p protocols such as syncthing or amule for private file sharing. What is the added value of the p2p part, compared to solutions like nextcloud, pydio or seafile?
Some people used emule for that purpose years ago, and it did not end well, btw.
Hu
Moderator

Joined: 06 Mar 2007
Posts: 13485

PostPosted: Sat Dec 29, 2018 4:56 pm

The advantages are the same as any place where peer-to-peer is superior to central distribution: a potentially more efficient or fault-tolerant protocol; the possibility that the desired data is available from a "better" node (lower latency, better bandwidth, etc.) than if it was always sent from a central hub.

For the protocol, some networks, notably some types of cellular/wireless, are notoriously unreliable. Transferring large files to such a user over a protocol that needs to rewind and restart from the beginning if the connection drops is painful. By necessity, most peer-to-peer protocols must handle sudden connection loss gracefully and minimize lost work, because you cannot count on any given peer remaining available for the full transfer. Centralized protocols may be designed with the idea that the central server is highly reliable, therefore there is no point in spending the effort to make the protocol tolerate client faults. It's possible to design any protocol to handle unreliable transit, but peer-to-peer protocols are actively discouraged from lacking this capability.
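The "minimize lost work" property usually comes from splitting content into independently hashed chunks, so an interrupted transfer only re-fetches pieces that are missing or corrupt. A toy sketch of the idea (not any particular protocol's wire format; the 4-byte chunk size is only for illustration):

```python
import hashlib

CHUNK = 4  # tiny chunk size for illustration; real protocols use ~256 KiB or more

def manifest(data: bytes):
    """Split data into fixed-size chunks and record each chunk's SHA-256."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    return [hashlib.sha256(c).hexdigest() for c in chunks], chunks

def resume(received: dict, hashes, fetch):
    """Re-fetch only chunks that are absent or fail their hash check."""
    for i, h in enumerate(hashes):
        c = received.get(i)
        if c is None or hashlib.sha256(c).hexdigest() != h:
            received[i] = fetch(i)  # only the bad/missing pieces move again
    return b"".join(received[i] for i in range(len(hashes)))

data = b"hello p2p world!"
hashes, chunks = manifest(data)
# Simulate a dropped connection: chunk 2 never arrived and chunk 1 is corrupt.
partial = {0: chunks[0], 1: b"XXXX", 3: chunks[3]}
restored = resume(partial, hashes, lambda i: chunks[i])
print(restored == data)  # True
```

The per-chunk hashes are what let a peer verify data from any source, including an untrusted one, which is why the same mechanism also makes multi-source download safe.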

For the origin point, consider sending the same large private blob to 3 people, all of whom use the same network, but none of whom are on a fast connection to your network. In the centralized case, everyone gets to download over the slow inter-network link. In the peer-to-peer case, the system might be able to send the blob to one recipient, then let the others retrieve it efficiently from that first recipient over the much faster localized network.
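The three-recipient case above can be put in rough numbers. Assuming (made-up figures) a 1 GB blob, a 10 Mbit/s inter-network link, and a 1 Gbit/s network shared by the recipients:

```python
BLOB_GBIT = 8   # a 1 GB blob is 8 Gbit
SLOW = 0.01     # 10 Mbit/s inter-network link, in Gbit/s
FAST = 1.0      # 1 Gbit/s network shared by the three recipients

# Centralized: all three copies traverse the slow link (serialized here for
# simplicity; sharing the link concurrently gives the same total).
centralized = 3 * BLOB_GBIT / SLOW

# Peer-to-peer: one copy crosses the slow link, then the other two recipients
# pull from the first peer over the much faster local network.
p2p = BLOB_GBIT / SLOW + 2 * BLOB_GBIT / FAST

print(f"centralized: {centralized:.0f} s, p2p: {p2p:.0f} s")
```

With these assumed numbers the centralized case takes roughly three times as long, and the gap grows with the number of recipients behind the fast network.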
erm67
Guru

Joined: 01 Nov 2005
Posts: 329
Location: EU

PostPosted: Sun Dec 30, 2018 11:59 am

Hu wrote:
The advantages are the same as any place where peer-to-peer is superior to central distribution: a potentially more efficient or fault-tolerant protocol; the possibility that the desired data is available from a "better" node (lower latency, better bandwidth, etc.) than if it was always sent from a central hub.

I have a Gbit/s ethernet network at home, 540 Mbit/s 5 GHz wireless, 50 Mbit/s down / 5 Mbit/s up VDSL, and 130 Mbit/s LTE is available everywhere outside. It is almost certain that CPU and memory constraints on some of my devices (since both btsync and syncthing tend to be extremely resource hungry) will negatively affect latency and bandwidth. So p2p might be useful on very slow connections, but it is very likely that a device with a slow network will also lack the resources to run a p2p protocol.
We are talking about a scenario where the original uploader puts a file on the network and very few peers download it. I shoot a photo on my phone and want to sync it to my kodi/libreelec box to watch it later on the tv; maybe I will also sync it to a backup on the nas; but typically there will always be one uploader and few peers. You know torrents with only one seed and a few leechers are terrible anyway; that is why they are ranked by the number of seeds.
In practice, using p2p will waste CPU and memory and result in lower bandwidth and higher latency.
The problem is that such p2p protocols were developed for a different scenario and for different purposes (like protecting the original uploader from the RIAA). On the other hand, if you have a nas it probably doesn't really make sense to replicate the same files on all your devices.

Hu wrote:

For the protocol, some networks, notably some types of cellular/wireless, are notoriously unreliable. Transferring large files to such a user over a protocol that needs to rewind and restart from the beginning if the connection drops is painful.

Yes, it is probably better than the worst alternative, but why would one use such a protocol, and what alternative still needs to restart from the beginning? Even plain HTTP can resume downloads .......... you know, wget -c
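What wget -c does is just arithmetic on the partial file: check how many bytes are already on disk, then request the remainder (over HTTP, via a `Range: bytes=<offset>-` header). A local simulation of that logic, with made-up file names and sizes:

```python
import os
import tempfile

# Simulate `wget -c`: a download died mid-way, and the client resumes by
# asking for bytes starting at the size it already has on disk (the same
# offset an HTTP "Range: bytes=<offset>-" request would carry).
src = b"A" * 1000 + b"B" * 1000          # the remote file's content
tmp = tempfile.mkdtemp()
partial = os.path.join(tmp, "download.part")

with open(partial, "wb") as f:           # the interrupted download
    f.write(src[:700])

offset = os.path.getsize(partial)        # wget -c derives the resume offset
remaining = src[offset:]                 # what "Range: bytes=700-" would return

with open(partial, "ab") as f:           # append; don't restart from zero
    f.write(remaining)

with open(partial, "rb") as f:
    print(f.read() == src)  # True: the resumed download matches the original
```

The server side only has to honor byte-range requests, which plain HTTP/1.1 already specifies, so no p2p machinery is needed for resumability alone.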

Hu wrote:

By necessity, most peer-to-peer protocols must handle sudden connection loss gracefully and minimize lost work, because you cannot count on any given peer remaining available for the full transfer. Centralized protocols may be designed with the idea that the central server is highly reliable, therefore there is no point in spending the effort to make the protocol tolerate client faults. It's possible to design any protocol to handle unreliable transit, but peer-to-peer protocols are actively discouraged from lacking this capability.

It was probably the RIAA-sponsored attacks on the p2p networks, in the attempt to disrupt the service, that forced p2p developers to use stronger integrity checks .......... some providers randomly drop packets belonging to p2p protocols. Consider that internet giants are considering dropping the TCP protocol, since packet drops and retransmissions are rare ....

Code:
wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.24  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 d  prefixlen 64  scopeid 0x20<link>
        ether   txqueuelen 1000  (Ethernet)
        RX packets 933775  bytes 1211246002 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3486565  bytes 4989450276 (4.6 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


(I know that retransmissions are handled by the card firmware and not reported to the OS but that doesn't make a difference in this case)

Hu wrote:

For the origin point, consider sending the same large private blob to 3 people, all of whom use the same network, but none of whom are on a fast connection to your network. In the centralized case, everyone gets to download over the slow inter-network link. In the peer-to-peer case, the system might be able to send the blob to one recipient, then let the others retrieve it efficiently from that first recipient over the much faster localized network.


You keep treating btsync and syncthing as if the only alternative were a bad user downloading with internet explorer from a bad ftp server, but that is not always true; maybe it is the worst case. And already back in the nineties, when networks were really unreliable, people quickly learned how to restart large ftp downloads and checksum them, or take advantage of zip file checksumming.
Hu
Moderator

Joined: 06 Mar 2007
Posts: 13485

PostPosted: Sun Dec 30, 2018 4:18 pm

For LAN, yes, it's probably a waste. For WAN, it's situation-dependent. You always want to get data from the most efficient authoritative server that has it, and in a peer-to-peer setup, that can be closer than a single centralized distributor. CDNs work around this problem for widely redistributed public content, but for intentionally private content, the only servers other than the original master that might have what you want are the other authorized peers. Users, especially non-technical ones, are notorious for choosing their transport programs for ease-of-use, not quality-of-result.
pjp
Administrator

Joined: 16 Apr 2002
Posts: 17758

PostPosted: Sun Dec 30, 2018 10:34 pm

erm67 wrote:
pjp wrote:
It appears to be "official" (the contact is pr) with 8,131 followers.


Do you think the fidelized userbase is much larger than that?
Presumably you're referring to the entire user base of all things in the "fediverse"? I have no idea, nor do I care. How is that relevant to the number of users on a Gentoo related channel, and where they might go when that channel shuts down?


erm67 wrote:
Well we should define p2p private file sharing first, otherwise we might be talking about different things.

If I share privately my files with my family I use private file sharing, the problem is that in my understanding p2p means a p2p protocol. I am not questioning the private file sharing part that I instead advocate I question use of p2p protocols aka syncthing amule for private file sharing. What is the added value from the p2p? Compared to solutions like nextcloud pydio seafile?
Since I mentioned it, I was referring to the general concept of decentralized file sharing, specifically in the sense of using a simple end user client that doesn't require a lot of management overhead. Something "like" software that is or has been used for p2p file sharing. If nextcloud, pydio and seafile do that without much management overhead, then they might be suitable. If they're modern bloatware, then I'd probably pass. If it is built upon a dynamic programming language oriented toward the web, then I start with the presumption that it has a security score of WordPress on a scale of 1 to WordPress.
erm67
Guru

Joined: 01 Nov 2005
Posts: 329
Location: EU

PostPosted: Mon Dec 31, 2018 3:34 pm

pjp wrote:

Since I mentioned it, I was referring to the general concept of decentralized file sharing, specifically in the sense of using a simple end user client that doesn't require a lot of management overhead. Something "like" software that is or has been used for p2p file sharing. If nextcloud, pydio and seafile do that without much management overhead, then they might be suitable. If they're modern bloatware, then I'd probably pass. If it is built upon a dynamic programming language oriented toward the web, then I start with the presumption that it has a security score of WordPress on a scale of 1 to WordPress.


Interesting, p2p file sharing and a security score in the same argument .............. even [url=https://en.wikipedia.org/wiki/Perfect_Dark_(P2P)]Perfect Dark[/url] has been hacked by the police (not the smartest hackers in the world), and that one was considered secure, and allegedly used by terrorists ... According to the original authors of bittorrent, obfuscation was (and is) the only security measure in the protocol.
What about onionshare?
_________________
True ignorance is not the absence of knowledge, but the refusal to acquire it.
Ab esse ad posse valet, a posse ad esse non valet consequentia

My fediverse account: @erm67@erm67.dynu.net
Back to top
View user's profile Send private message
erm67
Guru
Guru


Joined: 01 Nov 2005
Posts: 329
Location: EU

PostPosted: Mon Dec 31, 2018 6:29 pm    Post subject: Reply with quote

Hu wrote:
For LAN, yes, it's probably a waste. For WAN, it's situation-dependent. You always want to get data from the most efficient authoritative server that has it, and in a peer-to-peer setup, that can be closer than a single centralized distributor. CDNs work around this problem for widely redistributed public content, but for intentionally private content, the only servers other than the original master that might have what you want are the other authorized peers. Users, especially non-technical ones, are notorious for choosing their transport programs for ease-of-use, not quality-of-result.

BTW there is a project that aims to realize the kind of p2p we wish existed:
https://datproject.org/
I have only read about it, just browsing the docs (I know, nobody reads the docs nowadays), but it looks interesting. One thing is for sure: block synchronization protocols based on distributed hash tables, like those used in today's p2p protocols, are not ideal.

From the docs:
Quote:
Why Dat?
Cloud services, such as Dropbox or GitHub, force users to store data on places outside of their control. Until now, it has been very difficult to avoid centralized servers without major sacrifices. Dat's unique distributed network allows users to store data where they want. By decentralizing storage, Dat also increases speeds by downloading from many sources at the same time.

Having a history of how files have changed is essential for effective collaboration and reproducibility. Git has been promoted as a solution for history, but it becomes slow with large files and a high learning curve. Git is designed for editing source code, while Dat is designed for sharing files. With a few simple commands, you can version files of any size. People can instantly get the latest files or download previous versions.

In sum, we've taken the best parts of Git, BitTorrent, and Dropbox to design Dat. Learn more about how it all works by learning our key concepts or get more technical by reading the Dat whitepaper.

Distributed Network
Dat works on a distributed network unlike cloud services, such as Dropbox or Google Drive. This means Dat transfers files peer to peer, skipping centralized servers. Dat's network makes file transfers faster, encrypted, and auditable. You can even use Dat on local networks for offline file sharing or local backups. Dat reduces bandwidth costs on popular files, as downloads are distributed across all available computers, rather than centralized on a single host.

Data History
Dat makes it easy for you to save old versions of files. With every file update, Dat automatically tracks your changes. You can even direct these backups to be stored efficiently on an external hard drive or a cloud server by using our archiver.

Security
Dat transfers files over an encrypted connection using state-of-the-art cryptography. Only users with your unique link can access your files. Your dat link allows users to download and re-share your files. To write updates to a dat, users must have the secret key. Dat also verifies the hashes of files on download so no malicious content can be added. As long as the link isn't shared outside of your team, the content will be encrypted, though the IP addresses and discovery key may become known. Read more about security in dat.

Note: There has not been an independent security audit for Dat. Use at your own risk.


The last line makes me trust them more than other projects that brag about the security of their software.
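The hash check the quoted docs mention can be illustrated with a minimal sketch. SHA-256 and the function names here are my own assumptions for illustration; the real Dat protocol verifies blocks against a signed Merkle tree, not a flat hash comparison:

```python
import hashlib

def verify_block(block: bytes, expected_hash: str) -> bool:
    # A downloader recomputes the hash of each received block and compares it
    # with the hash published by the original writer; a mismatch means the
    # block was corrupted or maliciously altered in transit.
    return hashlib.sha256(block).hexdigest() == expected_hash

good = b"some file block"
published = hashlib.sha256(good).hexdigest()
print(verify_block(good, published))               # True
print(verify_block(b"tampered block", published))  # False
```

This is the property that lets a downloader accept blocks from untrusted peers: only the hashes have to come from a trusted source, the data can come from anywhere.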
_________________
True ignorance is not the absence of knowledge, but the refusal to acquire it.
Ab esse ad posse valet, a posse ad esse non valet consequentia

My fediverse account: @erm67@erm67.dynu.net
Back to top
View user's profile Send private message
pjp
Administrator
Administrator


Joined: 16 Apr 2002
Posts: 17758

PostPosted: Mon Dec 31, 2018 8:48 pm    Post subject: Reply with quote

erm67 wrote:
argument
That seems to be your objective. I mentioned a concept, not a specific solution. See also: https://en.wikipedia.org/wiki/Whataboutism
_________________
I honestly think you ought to sit down calmly, take a stress pill, and think things over.
Back to top
View user's profile Send private message
erm67
Guru
Guru


Joined: 01 Nov 2005
Posts: 329
Location: EU

PostPosted: Tue Jan 01, 2019 9:20 am    Post subject: Reply with quote

pjp wrote:

Since I mentioned it, I was referring to the general concept of decentralized file sharing, specifically in the sense of using a simple end user client that didn't require a lot of management overhead.

Nothing like that exists yet; we are a long way from a simple end-user client .... BTW what should a client of a decentralized serverless system look like :?:
pjp wrote:
Something "like" software that is or has been used for p2p file sharing.
You mean the GUI right?
pjp wrote:
If it is built upon a dynamic programming language oriented toward the web, then I start with the presumption that it has a security score of WordPress on a scale of 1 to WordPress,

So it's the language that matters; that would disqualify Go, Ruby, PHP, JavaScript, Erlang, Eiffel, Electron and Node.js I suppose, and most of the modern web infrastructure. I suppose that's why gentoo users need to find a more modern platform to meet ... since most new software is written that way, and apparently the gentoo users that use g+ and reddit's closed source adware app don't care too much about that rule.
_________________
True ignorance is not the absence of knowledge, but the refusal to acquire it.
Ab esse ad posse valet, a posse ad esse non valet consequentia

My fediverse account: @erm67@erm67.dynu.net
Back to top
View user's profile Send private message
Bones McCracker
Veteran
Veteran


Joined: 14 Mar 2006
Posts: 1609
Location: U.S.A.

PostPosted: Tue Jan 01, 2019 2:50 pm    Post subject: Reply with quote

I don't give a fuck. Gentoo is what it is. It's not even in the top 50 distributions. At least it's different and not yet another slightly modified Ubuntu which is itself a slightly modified Debian. Fuck it.
_________________
patrix_neo wrote:
The human thought: I cannot win.
The ratbrain in me : I can only go forward and that's it.
Back to top
View user's profile Send private message
pjp
Administrator
Administrator


Joined: 16 Apr 2002
Posts: 17758

PostPosted: Tue Jan 01, 2019 6:17 pm    Post subject: Reply with quote

erm67 wrote:
pjp wrote:
Since I mentioned it, I was referring to the general concept of decentralized file sharing, specifically in the sense of using a simple end user client that didn't require a lot of management overhead.

Nothing like that exists yet; we are a long way from a simple end-user client .... BTW what should a client of a decentralized serverless system look like :?:
pjp wrote:
Something "like" software that is or has been used for p2p file sharing.
You mean the GUI right?
Not just the GUI. I don't think one is required for the core functionality. The client has been simple for a long time. Install it, point to the files to be shared. Are you referring to the decentralized aspect as not existing?

erm67 wrote:
I suppose that's why gentoo users need to find a more modern platform to meet ... since most new software is written that way, and apparently the gentoo users that use g+ and reddit's closed source adware app don't care too much about that rule.
I still think g+ was used because it was convenient (already having an address) and at the time, it wasn't the (other) Evil Place. That the platform is being discontinued is why they need to find a new platform to meet. It need not be "modern." To be truly "modern," it would need to be mobile / phone device oriented, which does not seem like a particularly functional means of "support" for Gentoo systems.
_________________
I honestly think you ought to sit down calmly, take a stress pill, and think things over.
Back to top
View user's profile Send private message
Ant P.
Watchman
Watchman


Joined: 18 Apr 2009
Posts: 5584

PostPosted: Tue Jan 01, 2019 7:30 pm    Post subject: Reply with quote

erm67 wrote:
pjp wrote:

Since I mentioned it, I was referring to the general concept of decentralized file sharing, specifically in the sense of using a simple end user client that didn't require a lot of management overhead.

Nothing like that exists yet; we are a long way from a simple end-user client .... BTW what should a client of a decentralized serverless system look like :?:

It'd probably look like net-im/qtox.
Back to top
View user's profile Send private message
1clue
Advocate
Advocate


Joined: 05 Feb 2006
Posts: 2510

PostPosted: Wed Jan 02, 2019 1:58 am    Post subject: Reply with quote

What is it that I'm missing here?

Decentralized serverless peer-to-peer file sharing happened before a centralized server concept happened. "Kermit" is a peer to peer file sharing protocol.

Peer to peer file sharing happened before there were network cards and before there were IP addresses. It happened when you could download a text file and actually read it easily as it came across the wire. Modern file sharing is just layers on top of that concept.

The only way that it can be illegal is if different entities own the systems on each end and copyrighted material is transferred without proper licensing. Or if the material is restricted by some sort of top-secret classification by a government, or some other similar scenario. It would be the content which is illegal, not the protocol or the fact of sharing files.
Back to top
View user's profile Send private message
erm67
Guru
Guru


Joined: 01 Nov 2005
Posts: 329
Location: EU

PostPosted: Wed Jan 02, 2019 1:32 pm    Post subject: Reply with quote

Since the times of emule, that is, since the napster servers were taken down, most p2p software uses some kind of [url=https://en.wikipedia.org/wiki/Distributed_hash_table]DHT[/url] to store and share information among all nodes of the network without the need of a server. Some networks have multiple DHTs, for example one to find other nodes; others use rendezvous points (aka trackers). The original uploader publishes the tracker in an anonymous secret place and, after a few seeds appear, disconnects to hide his identity.
The point is that there are no clients, only nodes of the network that act both as server and client and so share some resources with the network. Basically all nodes in the network are participating in the copyright violation, and once the file is synchronized on several nodes it is impossible to know who was the original uploader of the file. Some networks add layers of encryption and obfuscation on top of that, which uses even more resources; others don't let participants throttle upload speed. The p2p network is the server.
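The serverless lookup a Kademlia-style DHT performs can be sketched in a few lines. This is a toy illustration with made-up peer names, not the code of any real client: every node derives an ID, "closeness" is the XOR of two IDs, and the peers closest to a key's ID are the ones that store and answer for it.

```python
import hashlib

def node_id(name: str) -> int:
    # Derive a 160-bit identifier from a name, as Kademlia-style DHTs do.
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # Kademlia measures "closeness" between identifiers as their XOR.
    return a ^ b

def closest_nodes(key: int, nodes: list[int], k: int = 3) -> list[int]:
    # A lookup asks the k peers whose IDs are XOR-closest to the key; the
    # same rule decides which peers store the key, so no server is needed.
    return sorted(nodes, key=lambda n: xor_distance(n, key))[:k]

# Toy network: every peer is simultaneously "client" and "server".
peers = [node_id(f"peer-{i}") for i in range(10)]
file_key = node_id("some-shared-file")
print(closest_nodes(file_key, peers))
```

Because the mapping from key to responsible peers is computed, not stored anywhere central, taking down any single machine removes nothing but one node's share of the data.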
That is why, besides a bit of fun about the general concept of a serverless client/server system (there are no clients without servers), I'd use a block synchronization p2p protocol based on distributed tables only to put my hands on some illegal content, or if I want to donate a bit of resources to a project I like.
A torrent with 1 seed and a couple of leechers with 1 kb/s upload speed that quit as soon as they are done downloading is just a giant waste of resources; it only makes sense if the content of the download is illegal.

Of course leechers that set 1 kb/s upload speed and consider their p2p program a client to download for free from multiple sources exist :-), maybe they even brag about how cool it is to be a leecher ...

Yeah, kermit was great in the times of fidonet (host/bbs system == client/server system), or to transfer ftp downloads completed in the background on a shell account, back when internet access meant dial-up access to a shell account :-)

Even when I dialed a friend's modem and we transferred some files with kermit, one instance was acting as the server and the other as the client. More or less the same with the dat:// protocol: I start a program serving the file (a server) and multiple clients can download from the server or from each other, sharing resources; there is no tracker to protect the identity of the original uploader, since it is not meant for illegal stuff. Still, there is a server of some sort.
A federated network of file sharing programs (like nextcloud, pydio, seafile) using the dat:// protocol would be the closest legal, non-anonymous alternative to current p2p networks :-), allowing resource sharing among members of the federation.
_________________
True ignorance is not the absence of knowledge, but the refusal to acquire it.
Ab esse ad posse valet, a posse ad esse non valet consequentia

My fediverse account: @erm67@erm67.dynu.net
Back to top
View user's profile Send private message