
This is neat, but I'm a little confused about those benchmark numbers and what they mean exactly. For example, with 10% or 50% packet loss you aren't going to get a TCP stream to do anything reasonable; it will seem to just "pause" and make very, very slow progress. When we talk about loss scenarios, we are typically talking about single-digit loss, and more often well under 1%. Scenarios of 10 to 50% loss are catastrophic, where TCP effectively ceases to function, so if this protocol works well in that environment, it is an impressive feat.

EDIT: It probably also needs to be clarified which TCP congestion control algorithm they are using. The TCP standards just dictate framing, windowing, etc., but algorithms are free to use their own strategies for retransmission and bursting, and the algorithm used makes a big difference across loss scenarios.

EDIT 2: I just noticed the number in parens is transfer success rate. Seeing 0% at 10% and 50% loss for TCP sounds about right. I'm still not sure I understand their UDP numbers, though: UDP isn't a stream protocol, so raw transferred data should be 100% minus the loss rate, unless they are running some protocol on top of it.




With the number in parens being the success rate, the timings make little sense to me.

A TCP connection with 10% loss will still work and transfer data (it's gonna suck and be very, very slow), but their TCP 10% loss example is somehow faster?

TCP, being reliable, will just get slower with loss until it eventually can't get any packets through within some TCP timeout (or some application-level timeout).

Even a connection with 50% packet loss will work, for some definition of the word "work". That also brings up the biggest missing variable in that chart: all of this depends heavily on latency.

50% loss on a connection with 1ms latency is much more tolerable than 1% loss on a connection with 1000ms latency and will transfer faster (caveats around algorithms and other things apply, but this is directionally correct).
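
For a rough sense of why, here's a back-of-envelope sketch using the Mathis et al. TCP throughput approximation, throughput ~ MSS * C / (RTT * sqrt(p)). The model is only derived for small loss rates, so treat the 50% figure as illustrative only; the MSS value is an assumption.

    import math

    def mathis_throughput(mss_bytes, rtt_s, loss_rate, c=math.sqrt(3 / 2)):
        # Mathis et al. steady-state TCP throughput estimate, in bytes/sec.
        # Only really valid for small loss rates; used here just to show
        # the direction of the loss-vs-latency trade-off.
        return (mss_bytes * c) / (rtt_s * math.sqrt(loss_rate))

    MSS = 1460  # assumed typical Ethernet-ish MSS

    for loss, rtt in [(0.50, 0.001), (0.01, 1.0)]:  # 50% @ 1ms vs 1% @ 1000ms
        bps = mathis_throughput(MSS, rtt, loss)
        print(f"loss={loss:.0%} rtt={rtt * 1000:.0f}ms -> ~{bps / 1024:.0f} KiB/s")

The model puts the 50%-loss/1ms case a couple of orders of magnitude ahead of the 1%-loss/1000ms case, which matches the directional claim above.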

A real chart for this would be a graph where X is the packet loss percentage and Y is the latency, with distinct lines per protocol (really one line per protocol configuration, e.g. TCP CUBIC with Nagle vs. without, with or without some device doing RED in the middle, different RED configurations, etc.; there are many parameters to test here).

If this sounds negative, it isn't meant to be; I think the research around effective high-latency protocols is very interesting and important. I was thinking recently (probably due to all the SpaceX news) about what the internet will look like for people on the Moon or on Mars. The current internet will just not work for them at all. We will require very creative solutions to provide a useful, open internet connection that isn't locked down to Apple/Facebook/Google/X/Netflix/etc.


A big problem with TCP's loss-based congestion control is that it assumes the cause of loss is congestion, so if you use it on a network with random packet loss it can just get slower and slower and slower until it effectively stops working entirely.
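
As a toy illustration (a crude Reno-style model, not any real TCP stack): halve the congestion window on any RTT that sees a loss, otherwise grow it by one segment. Once random loss is frequent, the halvings outpace the additive growth and the window gets pinned near the floor.

    import random

    def simulate_aimd(loss_rate, rtts=2000, seed=0):
        # Toy AIMD congestion window under random, non-congestion loss.
        # Each RTT: halve cwnd if any of its segments was lost, else add 1.
        rng = random.Random(seed)
        cwnd = 10.0
        total = 0.0
        for _ in range(rtts):
            lost = rng.random() < 1 - (1 - loss_rate) ** cwnd
            cwnd = max(cwnd / 2, 1.0) if lost else cwnd + 1
            total += cwnd
        return total / rtts  # average window in segments

    for p in (0.0001, 0.01, 0.10, 0.50):
        print(f"loss={p:.2%}  avg cwnd ~ {simulate_aimd(p):.1f} segments")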


Agreed that it's a problem but probably not a huge one today on the internet since most loss is actually a result of congestion.

It has led to some weird things, whereby L2 protocols (think WiFi and LTE/cellular) have their own reliability layers to combat the problem you're describing. I'm not sure whether things would be better or worse if they didn't do this and TCP were responsible; iteration on solutions would be much slower, and it probably could never be as good as the current situation, where the network presents a less lossy layer to TCP.
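
For intuition on why that L2 layer helps so much: if the link retries each frame up to k times and per-attempt losses are independent, the residual loss TCP sees is roughly p^(k+1). A quick sketch (the retry counts and 30% link loss are just example numbers, not what any particular WiFi/LTE stack uses):

    def residual_loss(link_loss, max_retries):
        # Loss rate left over after link-layer ARQ,
        # assuming each retransmission attempt fails independently.
        return link_loss ** (max_retries + 1)

    for retries in (0, 1, 3, 7):
        print(f"{retries} retries: 30% link loss -> "
              f"{residual_loss(0.30, retries):.4%} residual")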

We have to completely rethink things for interplanetary networking.

I thought I recognized your username, I remember jailbreaking the original iPhone on IRC with you helping :)


> I thought I recognized your username, I remember jailbreaking the original iPhone on IRC with you helping :)

Yep, same author of Cydia [1].

[1] https://en.wikipedia.org/wiki/Cydia


Sure, but this is a simulator with random packet loss that has nothing to do with congestion, isn't it?


That's what TCP-SACK (Westwood variant) and SCPS-TP are for, to deal with packet loss due to bit corruption as well as congestion.

https://egbert.net/images/tcp-evolution.png

https://en.m.wikipedia.org/wiki/Space_Communications_Protoco....


Similarly, I wonder if nyxpsi has congestion control? It's probably tricky to implement if you are trying to work with a crap network. I guess you could watch how packet loss or latency responds to throughput, but then you need to change the throughput periodically while transmitting, which slows down the transfer.
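
I don't know what nyxpsi actually does, but a minimal sketch of the probing you're describing might look like the loop below: nudge the send rate up while RTT stays near its baseline, and back off once queueing delay builds, so that random loss alone doesn't throttle the sender. Every name and threshold here is made up for illustration.

    def adjust_rate(rate, rtt_ms, base_rtt_ms,
                    backoff=0.8, probe_step=1.10, queue_threshold_ms=20.0):
        # Hypothetical delay-based rate controller (not nyxpsi's actual logic).
        # RTT inflation over the baseline, not loss, is the congestion signal.
        if rtt_ms - base_rtt_ms > queue_threshold_ms:
            return rate * backoff   # queues building: back off
        return rate * probe_step    # headroom left: probe upward

    rate, base_rtt = 1_000_000.0, 50.0  # start at 1 Mbit/s, 50 ms baseline
    for rtt in (52, 51, 55, 120, 130, 60, 53):  # made-up RTT samples
        rate = adjust_rate(rate, rtt, base_rtt)
        print(f"rtt={rtt}ms -> rate ~ {rate / 1e6:.2f} Mbit/s")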


Not sure what you're asking then. It uses the standard UDP implementation, and the benchmarks are about transferring the entirety of the sample data: success isn't measured by individual packet loss but by receiving the whole message. There is no retransmission. It's all or nothing...
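
That would explain the UDP column: if the sample payload spans N datagrams and nothing is retransmitted, the chance the whole message arrives under independent loss is (1 - p)^N, which collapses toward zero even though per-datagram delivery is still 100% minus the loss rate. Quick sketch (the 100-datagram message size is an assumption):

    def whole_message_success(loss_rate, n_datagrams):
        # P(every datagram arrives) with independent loss and no retransmission.
        return (1 - loss_rate) ** n_datagrams

    N = 100  # assumed: sample payload split across 100 datagrams
    for p in (0.01, 0.10, 0.50):
        print(f"loss={p:.0%}: per-datagram delivery={1 - p:.0%}, "
              f"whole-message success={whole_message_success(p, N):.2e}")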


Yes, especially since comparing against "UDP" this way doesn't make sense.


What's particularly confusing is that this appears to be coming from the dev team themselves -- this isn't some rando pointing at an interesting but unpolished GitHub repo they found.

Like, if they're looking for publicity by sharing their GitHub page, I'd expect the readme to have a basic elevator pitch, but their benchmarking section is a giant category error, and it's missing even a high-level summary of what it is doing to achieve good throughput at high packet loss rates.



