Conversation
Any pros/cons compared to Janmm14's? One pro, obviously, is that this is much simpler.
The functionality should be the same. I don't see any cons, and other than being simpler, no pros. The only functional difference is that I disabled consolidating while the player might be in the config state, to ensure server switches are as fast as possible. (But maybe this could be removed.)
Nah, that sounds like a good idea
Good idea to remove it, or to keep it disabled while in the config state? xd
No, keep it as is.
Alright
Here is a direct download link to the compiled binary with this patch applied, if someone is interested in testing it.
@andreasdc would you like to test it?
Looking at my impl copied from netty, we are not flushing on disconnect/close/exception/writability change/handlerRemoved here. I think these should be added.
Also, I think a value of 20 is kinda low. We might not even fill up a whole TCP packet of 1500 bytes with 20 packets, since the most common packets are very small (delta move-rot <= 14 / single block change <= 13 / block destroy stage <= 14 / animation <= 5 / exporb <= 31 / statistic <= 15). As we flush anyway once reading from the other connection is temporarily finished, counting flushes during read only serves to free up the connection's internal buffers. Therefore it might be more beneficial to use packet size instead of packet amount for this.
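The size-based threshold suggested above could look roughly like this. This is a hypothetical sketch, not code from this PR; the class, method names, and the 1500-byte MTU-based threshold are all my own assumptions:

```java
// Hypothetical sketch: count pending bytes instead of pending packets and
// flush once roughly one Ethernet MTU's worth of payload has accumulated.
final class SizeBasedFlushCounter {
    // Assumed threshold: one typical Ethernet MTU of 1500 bytes.
    private static final int FLUSH_THRESHOLD_BYTES = 1500;

    private int pendingBytes;

    /** Returns true when the caller should flush the channel now. */
    boolean onWrite(int packetSizeBytes) {
        pendingBytes += packetSizeBytes;
        if (pendingBytes >= FLUSH_THRESHOLD_BYTES) {
            pendingBytes = 0;
            return true;
        }
        return false;
    }

    /** Called once the read loop ends; any leftover data is flushed anyway. */
    void onReadComplete() {
        pendingBytes = 0;
    }
}
```

With 14-byte move packets this would buffer around 100 writes per flush instead of 20, while a single large chunk packet would still trigger a flush immediately.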
I used 20 because you did too.
As far as I know we don't need to flush on close, exception, or disconnect. But we should flush on writability change. Also, Bungee never removes anything from the pipeline at the time I enable consolidation, so handlerRemoved is not required either, I guess.
Isn't there a risk of higher latency if we do so? It's very important to me that we ensure no latency increase (20 flushes reduced to 1 flush is already a big performance increase).
Oh, well, that number was pulled out of thin air (I only thought about it very briefly). Should've done better at the time. Looking at the common packet sizes I mentioned, it might not be optimal.
I would guess that would only happen if there is some large chunk packet which we are now waiting on to be processed under the new threshold, keeping smaller packets from before from being sent. In general we're both just doing a bunch of guesswork; some measurements are needed, e.g. on write, save nanoTime into a long[22] array (I think threshold 20 leads to up to 21 packets until flush, plus 1 spare in case I messed up), compare with nanoTime just before the flush call, and print to console or so.
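The measurement idea described above could be sketched like this. All names here are invented for illustration and not taken from the PR; it only assumes the long[22] buffer suggested in the message:

```java
// Rough sketch of the proposed latency measurement: record System.nanoTime()
// for every buffered write, then report how long packets sat in the buffer
// when the flush finally happens.
final class FlushLatencyProbe {
    // Threshold 20 can leave up to 21 packets buffered before the flush
    // fires; one extra slot is kept as a safety margin, as suggested above.
    private final long[] writeTimes = new long[22];
    private int count;

    /** Call on every write that gets buffered instead of flushed. */
    void onWrite() {
        if (count < writeTimes.length) {
            writeTimes[count++] = System.nanoTime();
        }
    }

    /** Call just before flush; returns the longest wait in nanoseconds. */
    long onFlush() {
        long now = System.nanoTime();
        long maxWait = 0;
        for (int i = 0; i < count; i++) {
            maxWait = Math.max(maxWait, now - writeTimes[i]);
        }
        count = 0;
        return maxWait;
    }
}
```

Logging the returned value (or the full per-packet list) per flush would show whether the threshold actually adds user-visible latency.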
I understand, I also just put 20 there without much thought, but it seemed that using 40 instead of 20 makes no difference in latency (maybe more also makes no difference).
I fully agree this needs more testing.
Inspired by @Janmm14 #3393
I implemented a simple flush consolidation without modifying the netty pipeline. This change consists of tweaks in PacketHandlers, ChannelWrapper and HandlerBoss.
There is a system property to set the maximum consolidation amount.
Consolidating is enabled after the backend receives the login packet, to ensure all packets during login and config are sent without delay and cannot get lost.
It is disabled while switching to another server.
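The enable/disable rules described above could be sketched as follows. This is a hedged illustration, not the PR's actual code: the class, the method names, and the `bungee.flush.consolidate` property name are all assumptions:

```java
// Hypothetical sketch of the consolidation state machine: pass-through
// (flush every write) until the backend login packet arrives and during
// server switches; otherwise flush only every N buffered writes.
final class ConsolidationState {
    // Assumed property name; the PR exposes the limit via a system
    // property in a similar way.
    private static final int MAX_CONSOLIDATE =
            Integer.getInteger("bungee.flush.consolidate", 20);

    private boolean enabled;
    private int pending;

    /** Enable once the backend has received the login packet. */
    void onBackendLogin() {
        enabled = true;
    }

    /** Disable while switching servers so nothing is delayed or lost. */
    void onServerSwitchStart() {
        enabled = false;
        pending = 0;
    }

    /** Returns true if the caller should flush immediately. */
    boolean onWrite() {
        if (!enabled) {
            return true; // pass-through while consolidation is disabled
        }
        if (++pending >= MAX_CONSOLIDATE) {
            pending = 0; // threshold reached, flush and start over
            return true;
        }
        return false;
    }
}
```

Keeping the state out of the netty pipeline like this is what lets the patch avoid adding or removing handlers at runtime.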
It would be nice if it were tested.
I will probably test it on a friend's network, but more is better (I tested it on localhost and everything was working fine for me).
Closes #3392