
Cellular Sheaf Laplacians #22

Open · wants to merge 20 commits into main
Conversation

tylerhanks
Collaborator

This PR implements cellular sheaves and their Laplacian update dynamics in both shared memory (multithreaded) and distributed computing environments. It currently uses Julia's built-in threading and distributed capabilities, with channels for communication.

(This branch was created from Sam's cellular sheaf branch, so it also includes a bunch of his code.)
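
For context, a rough sketch of the per-vertex state that the code in this thread assumes (hypothetical field types; the actual types in the PR may differ):

# Hypothetical node type: a local section value, restriction maps to
# neighbors, and channels for exchanging edge values.
mutable struct SheafNode
    id::Int
    dimension::Int                                  # stalk dimension at this vertex
    x::Vector{Float64}                              # current local section value
    neighbors::Dict{Int,Matrix{Float64}}            # neighbor id => restriction map
    in_channels::Dict{Int,Channel{Vector{Float64}}}
    out_channels::Dict{Int,Channel{Vector{Float64}}}
end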

@jpfairbanks
Member

jpfairbanks commented Dec 18, 2024

If instead of computing

    x_new = x_old - step_size*delta_x

    for (n, rm) in node.neighbors
        put!(node.out_channels[n], rm*x_new)
    end

    node.x = x_new

you just return delta_x, you could pass this to any SciML ODE solver to do adaptive time stepping on the gradient flow.

Since the multithreaded version stores its solutions in shared memory, you don't even have to do anything fancy to integrate the software.

You just have to wrap your Laplacian step in the SciML interface:

function local_laplacian_step!(node, step_size)
    x_old = node.x
    delta_x = zeros(node.dimension)

    # Apply the sheaf Laplacian at this node: compare the outgoing edge value
    # with the neighbor's incoming one under each restriction map.
    for (n, rm) in node.neighbors
        outgoing_edge_val = rm*x_old
        incoming_edge_val = take!(node.in_channels[n])
        delta_x += rm'*(outgoing_edge_val - incoming_edge_val)
    end

    # Explicit Euler step on the gradient flow.
    x_new = x_old - step_size*delta_x

    # Publish the updated restriction of x to each neighbor.
    for (n, rm) in node.neighbors
        put!(node.out_channels[n], rm*x_new)
    end

    node.x = x_new
end

function laplacian_step!(nodes, step_size::Float32)
    Threads.@threads for node in nodes
        local_laplacian_step!(node, step_size)
    end
end
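
For reference, a hypothetical fixed-step driver for the function above (n_steps and the step size are illustrative), which the SciML rewrite below replaces with adaptive stepping:

# Hypothetical driver: run n_steps explicit Euler iterations at a fixed step size.
n_steps = 1000
for _ in 1:n_steps
    laplacian_step!(nodes, 0.01f0)
end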

Needs to become something like:

function local_laplacian_step!(du, u, node)
    # u and du are assumed to hold one subarray per node
    # (e.g. via RecursiveArrayTools.ArrayPartition; see the note below).
    x_old = u[node.id]
    delta_x = du[node.id]
    fill!(delta_x, 0)  # zero this node's derivative slot before accumulating

    # Gradient flow du/dt = -Lx: accumulate the negated sheaf Laplacian in
    # place (.-= mutates du; the solver, not this function, does the stepping).
    for (n, rm) in node.neighbors
        outgoing_edge_val = rm*x_old
        incoming_edge_val = take!(node.in_channels[n])
        delta_x .-= rm'*(outgoing_edge_val - incoming_edge_val)
    end

    # Publish the current state to each neighbor; there is no x_new here
    # because the ODE solver owns the update.
    for (n, rm) in node.neighbors
        put!(node.out_channels[n], rm*x_old)
    end
    return du[node.id]
end

function laplacian_differential!(du, u, p, t)
    nodes = p  # pass the node structs through the parameter slot
    Threads.@threads for node in nodes
        local_laplacian_step!(du, u, node)
    end
end

soln = solve(ODEProblem(laplacian_differential!, u0, tspan, nodes))

We need to use RecursiveArrayTools.jl or something like it to access the subarrays efficiently: https://docs.sciml.ai/RecursiveArrayTools/stable/array_types/#RecursiveArrayTools.ArrayPartition
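
For instance, a minimal sketch (with hypothetical stalk dimensions) of packing the per-node states into an ArrayPartition so each node's block is a contiguous subarray:

using RecursiveArrayTools

# Hypothetical stalk dimensions 3, 2, 4; u0.x[i] is node i's subarray.
u0 = ArrayPartition(zeros(3), zeros(2), zeros(4))

u0.x[2] .= 1.0          # mutate node 2's block in place, no copying
du = similar(u0)
du.x[2] .= -u0.x[2]     # derivative slots are indexed the same way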

Docs are failing on the templated literate build.
@jpfairbanks
Member

Docs are building now. I would feel ready to merge this if you wanted to add additional issues for improving the performance of the multithreaded and distributed versions.

@tylerhanks
Collaborator Author

> Docs are building now. I would feel ready to merge this if you wanted to add additional issues for improving the performance of the multithreaded and distributed versions.

You finally got the docs to build, amazing!! I will wrap the new stuff in modules, merge this, and make issues.
