
Network card #63 (Draft)

Ktlo wants to merge 319 commits into base: 1.16-forge

Conversation

@Ktlo (Contributor) commented Aug 17, 2021

Connected issue: #54.

UDP and ICMP echo requests mostly work. I need a review now to know whether I should continue with this code or do it another way.

Major things that will be done later:

The main thing here is the InternetProvider interface. There is a default implementation of this interface, and it can be swapped out for an implementation provided via ServiceLoader, if such an implementation exists.
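Roughly, that lookup could work like this (a sketch only; DefaultInternetProvider and the resolveProvider method are placeholder names, not actual PR code):

import java.util.ServiceLoader;

// Sketch: pick the first InternetProvider registered via META-INF/services,
// falling back to the built-in default implementation otherwise.
static InternetProvider resolveProvider() {
    return ServiceLoader.load(InternetProvider.class)
            .findFirst()
            .orElseGet(DefaultInternetProvider::new);
}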

@Ktlo marked this pull request as draft August 17, 2021 20:34
@fnuecke added the enhancement (New feature or request) label Aug 23, 2021
@fnuecke (Owner) commented Aug 23, 2021

Cool, thanks! I'll try to review this when I can. Once my brain's willing to take a break from messing around with eLua :x

@fnuecke (Owner) left a comment

Thanks again for the work on this. Sorry for the long wait, been super busy lately. Sprinkled some small Qs, but I'll try to recap my understanding of the architecture here. Then you can correct me where I'm misunderstanding things:

  • The InternetManager is the main message pump, moving frames queued from the real network into accessors (e.g. inet card) and back. This happens on a dedicated worker thread that is kicked off once every tick.
  • Inet access is abstracted by interfaces representing the different layers, which build on top of each other. This is similar to how regular streams wrap (e.g. input stream into buffered input stream into gzip stream); see the sketch at the end of this comment.
  • Sessions are basically used to track recent routes to keep cached data for this, e.g. the DatagramChannel, and are basically the "lowermost" layer, talking to the host network.
    • One thing that slightly rubs me the wrong way here: e.g. sendSession seems to be a mix of regular send and an onTick/update like method, e.g. to check expired stuff. Would be easier to digest/understand for me if this were split into an explicit "this happens every tick"-like method, I think.

I'm very curious how TCP will work here. I suspect there'll be a session implementation that keeps Sockets around, or something?

Other than that, looks like a great start. I am wondering if there'd be any real interest in hooking into layers other than the lowermost, hardware-adapter level (sessions, from my understanding). I have a gut feeling things might be simplified a good amount if that's not a requirement. Maybe it would even allow getting rid of some copies? I dunno. Just thinking out loud here :)
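For reference, here is the layering as I currently picture it (a sketch only; DefaultLinkLocalLayer appears in the PR, the other class names are my guesses):

// Sketch of the stream-like wrapping described above; only
// DefaultLinkLocalLayer is taken from the PR, the rest are guessed names.
final SessionLayer sessions = new DatagramSessionLayer();        // lowermost, talks to the host network
final NetworkLayer network = new DefaultNetworkLayer(sessions);  // IP-level handling on top of sessions
final LinkLocalLayer link = new DefaultLinkLocalLayer(network);  // what the inet card accessor sees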

protected abstract LinkLocalLayer provideLinkLocalLayer() throws Exception;

@Override
public final LinkLocalLayer provideInternet() {
@fnuecke (Owner):

Why not just make this abstract instead and force people to implement the method? Is there a use case where this is implemented but never provided, and used differently? If this is about falling back to the default implementation, to me it'd be intuitive if this method returned an Optional<LinkLocalLayer> and the caller fell back to the default if no other implementation is found. That way only the caller has to be concerned with what the default is.

(This comment applies for all the other providers using this pattern as well.)
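Something along these lines is what I have in mind (just a sketch; the factory method name is made up):

import java.util.Optional;

// Sketch: the provider returns empty when it has nothing to offer, and only
// the caller knows what the default is.
public interface InternetProvider {
    Optional<LinkLocalLayer> provideInternet();
}

// Caller side, with a hypothetical factory for the default implementation:
final LinkLocalLayer layer = provider.provideInternet()
        .orElseGet(InternetManager::createDefaultLinkLocalLayer);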

@Ktlo (Contributor, Author) commented Sep 21, 2021

Implementations of *LayerInternetProvider are expected to override the behavior of the internet card at exactly one layer of the network stack. In that case the method LinkLocalLayerInternetProvider::provideInternet does not fall back to the default internet card implementation, but to an implementation that does nothing (a stub) in case of an error. This is done to avoid crashing the server and to make writing implementations of the *LayerInternetProvider classes easier. I also thought about cases where, on an error in a SessionLayerInternetProvider implementation, the developer of that implementation might want to return a stub SessionLayer themselves. However, the SessionLayer sits so high in the TCP/IP stack that the default implementations of all the lower layers would then perform meaningless operations on packets leaving the virtual machine.

Looking back, I think that was completely unnecessary. I agree that Optional<LinkLocalLayer> would be better in this case. However, I now think it makes sense to leave the method's return type as it is, while still offering access to the default implementations and the stubs. For example, I could introduce the methods NetworkLayerInternetProvider::defaultNetworkLayer and NetworkLayerInternetProvider::nullNetworkLayer:

public abstract class NetworkLayerInternetProvider extends LinkLocalLayerInternetProvider {
    protected static NetworkLayer defaultNetworkLayer() { ... }

    protected static NetworkLayer nullNetworkLayer() {
        return NullNetworkLayer.INSTANCE;
    }

    protected abstract NetworkLayer provideNetworkLayer();

    @Override
    public final LinkLocalLayer provideLinkLocalLayer() {
        final NetworkLayer networkLayer = provideNetworkLayer();
        if (networkLayer == NullNetworkLayer.INSTANCE) {
            // Optimize stubs: a stub network layer needs no real link-local layer
            return NullLinkLocalLayer.INSTANCE;
        } else {
            return new DefaultLinkLocalLayer(networkLayer);
        }
    }
}

A basic implementation of the new NetworkLayerInternetProvider class would look like this:

public abstract class MyNetworkLayerInternetProvider extends NetworkLayerInternetProvider {
    @Override
    protected NetworkLayer provideNetworkLayer() {
        try {
            // Wrap the default implementation
            return new MyNetworkLayer(defaultNetworkLayer());
        } catch (Exception e) {
            // Error! Return the stub; do not crash the server.
            printException(e);
            return nullNetworkLayer();
        }
    }
}

If you are satisfied with this solution, I can do it this way. If you find it too complicated for developers implementing the *LayerInternetProvider classes, I will rewrite this part using Optional as you suggested.

@Subscribe
public void handleResumingRunningEvent(final VMResumingRunningEvent event) {
    isRunning = true;
    InternetManager.connect(internetAccess);
@fnuecke (Owner):

Curious: why not a remove in suspend or so? Is the removal via an isValid check because of a distrust in the lifecycle events?

@Ktlo (Contributor, Author):

I'm not sure why I did it this way. I think the point was to stop communicating with the InternetManager object directly from the InternetCardItemDevice object and to use only the InternetAccess interface as an intermediary. Otherwise I would have to either add a feedback channel to the InternetAccess interface (InternetAccess::close and InternetAccess::onClose), or add another method to the InternetManager object for changing its state.
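The feedback variant would look roughly like this (a sketch of the rejected alternative; the exact signatures are illustrative, not PR code):

// Sketch: the card would close the access object explicitly and the manager
// would observe the closure instead of polling isValid.
public interface InternetAccess {
    void close();                    // card side: stop communicating
    void onClose(Runnable callback); // manager side: get notified of closure
}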

        state = States.FINISH;
        return;
    default:
        throw new IllegalStateException();
@fnuecke (Owner):

Maybe closing a closed session should just do nothing, similar to streams?

@Ktlo (Contributor, Author):

I have limited the number of sent and received packets to one packet per tick. I assume that any update of a Session object can generate a packet, including an invocation of the Session::close method. That will not happen with UDP, but with TCP there will be at least a FIN packet. This means the SessionLayer implementation must take into account that calling the close method on a Session object should update that object and then call the SessionLayer.Receiver::receive method. Because of this, I treat a double update of a Session object as an indication of a bug in the SessionLayer implementation.
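In other words, the state handling around close looks roughly like this (a sketch along the lines of the snippet above; the ESTABLISHED state name and the surrounding method are assumed):

// Sketch: a session is expected to be closed exactly once, so a second
// update is a bug in the SessionLayer implementation, not a no-op.
public void close() {
    switch (state) {
        case ESTABLISHED:
            // Closing may still emit one last packet this tick (e.g. a TCP FIN).
            state = States.FINISH;
            return;
        default:
            throw new IllegalStateException("session updated twice");
    }
}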

executor.execute(() -> {
    try {
        if (address.isReachable(null, echoSession.getTtl(), Config.defaultEchoRequestTimeoutMs)) {
            echoResponse.set(response);
@fnuecke (Owner):

Does this, in combination with the read in receiveSession, mean that the likelihood of these getting "dropped" increases when more endpoints make these requests? Is the probability of this colliding in a single MC tick so low that we don't care? Or am I just missing something? Would appreciate hearing your insights on this, thanks!

@Ktlo (Contributor, Author):

Yes, you understood everything correctly. I try not to store any messages and instead drop them in such situations, in order to avoid possible exploitation of the internet card by players. If a single variable is not enough to store a pending message, then we should expect that any queue of such messages would overflow as well. However, if this becomes a major problem in the future, a queue could be introduced.
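The single-slot behavior boils down to this (a sketch; the surrounding class is hypothetical, only the echoResponse field mirrors the snippet above):

import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicReference;

final class EchoResponseSlot {
    // At most one pending response is kept; a newer one overwrites it.
    private final AtomicReference<ByteBuffer> echoResponse = new AtomicReference<>();

    // Writer side (worker thread): store the latest response, dropping any unread one.
    void offer(final ByteBuffer response) {
        echoResponse.set(response);
    }

    // Reader side (e.g. receiveSession): take the pending response, if any.
    ByteBuffer poll() {
        return echoResponse.getAndSet(null);
    }
}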


@Override
public void sendSession(final Session session, @Nullable final ByteBuffer data) {
    if (session instanceof EchoSession) {
@fnuecke (Owner):

Can this be solved via polymorphism instead of a type switch? I.e. a session.send(ByteBuffer, PossiblyNeededContext) or something?

@Ktlo (Contributor, Author):

Initially, it was like that. However, such an implementation requires one unnecessary copy of the data, and it also creates the false impression that the transmitted data is stored in the session object. A session object is an associative object: it exists to distinguish which piece of information belongs to whom. The data itself only exists while the SessionLayer::sendSession method is executing. Essentially, I keep unpacking the network stack layers even at the session layer. It is more accurate to say that the ByteBuffer object contains the Session object than the other way around.

However, in the case of TCP, the data will indeed be stored directly in the session object. But for the sake of generality, and to reduce the number of objects created, I settled on this solution.

@Ktlo (Contributor, Author) commented Sep 21, 2021

I tried to optimize the code as much as possible and to avoid copying. I thought this might be important, since you can expect a lot of frames from all the internet cards on a server. The default implementations of the *Layer interfaces will be hidden from developers implementing the proposed interfaces; all the most difficult and unpleasant parts stay in there. I tried to keep the SessionLayer interface simple enough, but I understand that an API based on InputStream/OutputStream (TCP) and BlockingQueue (UDP) would be much easier to understand and use.
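For comparison, such an API could look roughly like this (purely hypothetical interfaces, not part of this PR):

import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.util.concurrent.BlockingQueue;

interface TcpConnection {
    InputStream getInputStream();   // bytes arriving from the remote host
    OutputStream getOutputStream(); // bytes to send to the remote host
}

interface UdpSocket {
    BlockingQueue<ByteBuffer> incoming(); // received datagrams, ready to poll
    void send(ByteBuffer datagram);       // queue a datagram for sending
}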

Thank you for your review. Now I am confident that I am on the right track. I will try to continue working on this in the near future.

@ajh123 commented Jun 14, 2022

I can't seem to get the internet card sending. How do I configure it?
I can get it working, but only without a normal network card inside. How do I have both?

@Ktlo (Contributor, Author) commented Jun 25, 2022

I can't seem to get the internet card sending. How do I configure it? I can get it working, but only without a normal network card inside. How do I have both?

In the oc2-common.toml configuration file you need to set internetCardEnabled to true. Also check the contents of the deniedHosts option in the same file.

In the game, you need to insert an internet card into a computer and run the following script on that computer:

# eth0 is the interface provided by the internet card
ip link set eth0 up

# Can be any subnet, but it is reasonable to assign a point-to-point subnet with a /30 prefix
ip addr add 192.168.2.1/30 dev eth0

# 192.168.2.2 is the address of the host simulated by the internet card;
# that host acts as the default gateway
ip route add default via 192.168.2.2
