Federation Mode #11
A quick update on TB5 and Link-Local IPv6. I was able to do device discovery on the `bridge0` interface with the following:

```go
package main

import (
	"log"
	"net"
	"os"
	"time"

	"golang.org/x/net/icmp"
	"golang.org/x/net/ipv6"
)

func main() {
	interfaceName := "bridge0"
	multicastAddr := "ff02::1" // all-nodes link-local multicast group

	// Get the network interface
	ifi, err := net.InterfaceByName(interfaceName)
	if err != nil {
		log.Fatalf("Failed to get interface: %v", err)
	}

	// Create a raw ICMPv6 listener
	conn, err := icmp.ListenPacket("ip6:ipv6-icmp", "::")
	if err != nil {
		log.Fatalf("Failed to create ICMP listener: %v", err)
	}
	defer conn.Close()

	// Use ipv6.PacketConn for control over the interface
	p := conn.IPv6PacketConn()
	if err := p.SetControlMessage(ipv6.FlagInterface, true); err != nil {
		log.Fatalf("Failed to set control message: %v", err)
	}

	// Bind outgoing multicast to the specified interface
	if err := p.SetMulticastInterface(ifi); err != nil {
		log.Fatalf("Failed to set multicast interface: %v", err)
	}

	// Prepare the ICMPv6 Echo Request
	echoRequest := icmp.Message{
		Type: ipv6.ICMPTypeEchoRequest,
		Code: 0,
		Body: &icmp.Echo{
			ID:   os.Getpid() & 0xffff,
			Seq:  1,
			Data: []byte("Hello"),
		},
	}

	// Serialize the request
	echoBytes, err := echoRequest.Marshal(nil)
	if err != nil {
		log.Fatalf("Failed to marshal ICMP request: %v", err)
	}

	// Define the destination address. A raw IP socket takes a *net.IPAddr
	// (not *net.UDPAddr), and link-local addresses need the zone set.
	dstAddr := &net.IPAddr{
		IP:   net.ParseIP(multicastAddr),
		Zone: ifi.Name,
	}

	// Send the Echo Request
	if _, err = p.WriteTo(echoBytes, nil, dstAddr); err != nil {
		log.Fatalf("Failed to send ICMP request: %v", err)
	}
	log.Println("ICMP Echo Request sent to", multicastAddr)

	// Collect responses for up to 5 seconds
	if err = p.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil {
		log.Fatalf("Failed to set read deadline: %v", err)
	}

	respBuffer := make([]byte, 1500)
	for {
		n, cm, peer, err := p.ReadFrom(respBuffer)
		if err != nil {
			if os.IsTimeout(err) {
				log.Println("Timed out waiting for responses")
				break
			}
			log.Fatalf("Error reading ICMP response: %v", err)
		}

		// Parse the ICMPv6 response
		respMessage, err := icmp.ParseMessage(ipv6.ICMPTypeEchoReply.Protocol(), respBuffer[:n])
		if err != nil {
			log.Printf("Failed to parse ICMP message from %v: %v", peer, err)
			continue
		}
		if respMessage.Type == ipv6.ICMPTypeEchoReply && cm != nil {
			log.Printf("Received Echo Reply from %v (Interface Index: %d)", peer, cm.IfIndex)
		}
	}
}
```

The only caveat is that it needs to run as root, since raw ICMP sockets require elevated privileges. I guess we can do either a static list of IPs or have auto-discovery. It's also strange that the link-local IPv6 addresses of the Mac Minis don't seem to follow EUI-64 practice, and it's not clear how they're derived from the MAC address (presumably macOS generates stable, opaque interface identifiers per RFC 7217 instead of embedding the MAC).
I guess for the federation mode we can just optimize for reads, and fall back to downloading directly from the remote if a read proxy fails. We can configure an interface for where reads should be proxied to.
Once we have the Thunderbolt 5 interconnect between the Mac Minis, we can think of running chacha instances on each of the nodes so they can act as a distributed cache proxy.
Let's say we have 16 Mac Minis, all interconnected via Thunderbolt 5, each with its own private IP address. The chacha instance running locally can be told about all 16 IP addresses and a replication factor, say 2 for this example.
Then, upon download, the instance determines which shard a blob key belongs to and redirects the request to one of those shards at random if it isn't this instance. If it is this instance, it returns the blob from the local cache, or downloads it from R2 and returns it.
On the upload side we can do much the same, or perhaps go further and make sure all the replicas actually have the blob.
In this setup, all the persistent workers use the local instance of chacha, which distributes the blobs across the interconnected Macs.