btd inversion to negf.py #106

Open · AleksBL wants to merge 20 commits into main
Conversation

@AleksBL commented Sep 10, 2021

Added the BTD inversion to NEGF.py.
Edit: I forgot to add the tbt flag to the _G calls in the NEGF class. I'm a GitHub noob and it won't allow me to add this for some reason...
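
[Editor's note: for readers unfamiliar with the technique, here is a minimal numpy sketch of inverting the diagonal blocks of a block tri-diagonal (BTD) matrix via left/right-connected recursions. This is not the code in this PR; the function name and list-of-blocks layout are illustrative.]

```python
from numpy.linalg import inv

def btd_diag_blocks(A, B, C):
    """Diagonal blocks of the inverse of a block tri-diagonal matrix M.

    A: list of p diagonal blocks A_i
    B: list of p-1 upper off-diagonal blocks, B[i] = M[i, i+1]
    C: list of p-1 lower off-diagonal blocks, C[i] = M[i+1, i]
    """
    p = len(A)
    # left-connected inverses, sweeping down the diagonal
    gL = [inv(A[0])]
    for i in range(1, p):
        gL.append(inv(A[i] - C[i - 1] @ gL[i - 1] @ B[i - 1]))
    # right-connected inverses, sweeping back up
    gR = [None] * p
    gR[-1] = inv(A[-1])
    for i in range(p - 2, -1, -1):
        gR[i] = inv(A[i] - B[i] @ gR[i + 1] @ C[i])
    # combine both sweeps into the true diagonal blocks of the inverse
    G = []
    for i in range(p):
        M = A[i].astype(complex)
        if i > 0:
            M = M - C[i - 1] @ gL[i - 1] @ B[i - 1]
        if i < p - 1:
            M = M - B[i] @ gR[i + 1] @ C[i]
        G.append(inv(M))
    return G
```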

@tfrederiksen (Member)

> Added the BTD inversion to NEGF.py.

Thanks for the PR, it looks great!

> Edit: I forgot to add the tbt flag to the _G calls in the NEGF class. I'm a GitHub noob and it won't allow me to add this for some reason...

You can keep pushing commits to your branch and they will appear here.

@AleksBL (Author) commented Sep 11, 2021

I think I have cloned a wrong version of the hubbard code; the rest of the code still has the energy loops, so this code doesn't make sense right now.

@sofiasanz (Member)

Hi @AleksBL, thanks for the PR! The changes I made to the code last week are in another branch (#105). I haven't merged that branch yet since some adjustments still needed to be considered.

@AleksBL (Author) commented Oct 2, 2021

Added some more stuff

@AleksBL (Author) commented Oct 2, 2021

I don't understand, is this related to the "batching" of the energy points? :)

@zerothi (Collaborator) commented Oct 2, 2021

> I don't understand, is this related to the "batching" of the energy points? :)

Yes, but perhaps I misunderstood its purpose. I just saw you did array_split, but to me it seems better to define the number of energy points. It is just a comment, you decide :)
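
[Editor's note: a small illustration of the two batching conventions being discussed; the variable names and grid are hypothetical.]

```python
import numpy as np

E = np.linspace(-2.0, 2.0, 500)            # illustrative energy grid

# splitting by number of chunks, as array_split does:
for batch in np.array_split(E, 10):        # 10 batches of ~50 points each
    ...                                    # build and invert for this batch

# splitting by a fixed number of energy points per batch, which makes the
# peak memory per batch predictable regardless of the grid size:
n_per_batch = 50
for i0 in range(0, len(E), n_per_batch):
    batch = E[i0:i0 + n_per_batch]
    ...
```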

@AleksBL (Author) commented Oct 2, 2021

Ahh, that must be Sofia's doing, I didn't know this was in there already! :)

@sofiasanz (Member)

Yes, I added the array_split method to avoid building too-large matrices and instead do it in "batches". This is only done for the NEQ integrals, since that is the only place where, in principle, we need more than just the diagonal of the Green's function. But actually we would only need the matrix blocks involving the indices that map the electrodes in the device, so maybe the array_split is not even necessary there...

@zerothi (Collaborator) commented Oct 4, 2021

> Yes, I added the array_split method to avoid building too-large matrices and instead do it in "batches". This is only done for the NEQ integrals, since that is the only place where, in principle, we need more than just the diagonal of the Green's function. But actually we would only need the matrix blocks involving the indices that map the electrodes in the device, so maybe the array_split is not even necessary there...

The way I think of this is that you also build up the entire `zS - H - sum(Sigma)` for each energy point, i.e. the number of stored matrix elements becomes quite high. Is this wrong?
My thought is that at some later time you'll try this out on a larger system, and then it may be much easier to calculate the memory requirement from the number of energy points rather than from the number of blocks.
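
[Editor's note: a hypothetical helper in the spirit of this suggestion; btd_memory_gb and its storage-layout assumptions are illustrative, not part of the PR.]

```python
def btd_memory_gb(block_sizes, n_energy, bytes_per_elem=16):
    """Rough memory for storing the A_i, B_i, C_i blocks (complex128 by
    default) for every energy point in a batch; block_sizes lists the
    dimensions n_1..n_p of the diagonal blocks."""
    elems = 0
    for i, n in enumerate(block_sizes):
        elems += n * n                                # diagonal block A_i
        if i + 1 < len(block_sizes):
            elems += 2 * n * block_sizes[i + 1]       # off-diagonal B_i and C_i
    return n_energy * elems * bytes_per_elem / 1e9

# e.g. 20 blocks of 250 orbitals at 100 energy points -> ~5.8 GB
print(btd_memory_gb([250] * 20, n_energy=100))
```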

@AleksBL (Author) commented Oct 4, 2021

Yes, `zS - H - sum(Sigma)` is built for all the given energy points, but I guess it only stores the A_i, B_i and C_i blocks, so this only scales linearly with n_orbitals and linearly with the number of E-points... I guess having to call the matrix operations on these 4-dim arrays is the tradeoff of having to do this in Python if we still want some speed? :)
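
[Editor's note: the speed of these 4-dim block stacks comes from numpy broadcasting its linear-algebra routines over the leading (spin, energy) axes; a small sketch with made-up sizes.]

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (1, 100, 50, 50)                   # (n_spin, n_E, block, block)
A = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

Ainv = np.linalg.inv(A)                    # inverts all 100 blocks in one call
err = np.abs(A @ Ainv - np.eye(50)).max()  # ~0 for every (spin, energy) pair
```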

@zerothi (Collaborator) commented Oct 4, 2021

> Yes, `zS - H - sum(Sigma)` is built for all the given energy points, but I guess it only stores the A_i, B_i and C_i blocks, so this only scales linearly with n_orbitals and linearly with the number of E-points... I guess having to call the matrix operations on these 4-dim arrays is the tradeoff of having to do this in Python if we still want some speed? :)

I think speed only becomes important when the sizes of the blocks are small, in which case the overhead becomes high. For larger matrices/blocks (say 400) I wouldn't suspect much overhead. This is about how a user would use the routines, not so much about efficiency (which users can control with the parameter).

@AleksBL (Author) commented Oct 4, 2021

Just tried to see how far I could push my laptop:
A 20x20 block matrix, with blocks of random shapes between (200, 200) and (300, 300), yielding a (1, 100, 5105, 5105) shaped array in the end, takes up about 7 GB of RAM once everything used to build it has been cleared from memory. The equivalent dense numpy array would take up 38 GB.
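
[Editor's note: these figures are consistent with a back-of-the-envelope count, assuming complex128 storage at 16 bytes per element.]

```python
n_E, N = 100, 5105
dense_gib = n_E * N**2 * 16 / 2**30        # ~38.8 GiB, the "38 GB" dense figure

# BTD storage: 20 diagonal blocks of roughly 250x250 plus 2 * 19 off-diagonal
# blocks of similar size:
btd_gib = n_E * (20 + 2 * 19) * 250**2 * 16 / 2**30   # ~5.4 GiB, same order
                                                      # as the ~7 GB observed
```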

@sofiasanz (Member)

OK, that seems undoable if we want to run calculations locally on our laptops...

@AleksBL (Author) commented Oct 30, 2021

Hi, I have something that is partially working right now on the graphene strip, but I'm encountering an error where the self-energy passed to the _G function has shape (4,) instead of the usual (4, 4). It's not really a problem to tell the code what to do when it gets this, but is it on purpose that a self-energy array with this shape is given?

@sofiasanz (Member)

Hi @AleksBL, which script are you using to test it? I, for instance, checked test-hubbard-open.py, and the shape of the self-energy matrix is (4, 4), not (4,), when passed into the _G function... I'm not sure why you are getting that, but you are right that in any case it should be in matrix form, not a vector.

@sofiasanz (Member)

Hi @AleksBL, I think you were right: there was an error in the shape of the nested lists that contain the self-energies. I fixed it in commit 8c0113a in the branch green-matrix. Thank you for bringing this issue up :-).

@AleksBL (Author) commented Nov 10, 2021

Okay cool, I'll get back to it soon :)
