OSError: Unable to create link (name already exists) #18

Open
eftalgezer opened this issue Sep 4, 2022 · 0 comments

eftalgezer commented Sep 4, 2022

====== Iteration 0 ======

Running SCF calculations ...
-----------------------------

converged SCF energy = -76.3545540706806
converged SCF energy = -76.3508207847105
converged SCF energy = -76.355707764332
converged SCF energy = -76.356824320776
converged SCF energy = -76.3739444533522
converged SCF energy = -76.3695047518221
converged SCF energy = -76.3694359857328
converged SCF energy = -76.3496333319949
converged SCF energy = -76.3557216068751
converged SCF energy = -76.3662805538731

Projecting onto basis ...
-----------------------------

workdir/0/pyscf.chkpt
workdir/1/pyscf.chkpt
workdir/2/pyscf.chkpt
workdir/3/pyscf.chkpt
workdir/4/pyscf.chkpt
workdir/5/pyscf.chkpt
workdir/6/pyscf.chkpt
workdir/7/pyscf.chkpt
workdir/8/pyscf.chkpt
workdir/9/pyscf.chkpt
10 systems found, adding 97a66c91908d8f76f249705362d9e536
10 systems found, adding energy
10 systems found, adding energy

Baseline accuracy
-----------------------------

{'mae': 0.05993, 'max': 0.09156, 'mean deviation': 0.0, 'rmse': 0.06635}
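For reference, the four baseline-accuracy numbers above are standard error statistics over the per-system energy deviations. The sketch below shows how such a dictionary could be computed; the sample deviations are made up for illustration and are not the actual NeuralXC data.

```python
# Illustrative computation of the reported metrics (mae, max,
# mean deviation, rmse) from a vector of energy deviations.
# The deviation values here are hypothetical.
import numpy as np

deviations = np.array([0.05, -0.03, 0.08, -0.06])  # hypothetical per-system errors

metrics = {
    "mae": float(np.mean(np.abs(deviations))),          # mean absolute error
    "max": float(np.max(np.abs(deviations))),           # worst-case absolute error
    "mean deviation": float(np.mean(deviations)),       # signed bias (0.0 if centered)
    "rmse": float(np.sqrt(np.mean(deviations ** 2))),   # root-mean-square error
}
print(metrics)
```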

Fitting initial ML model ...
-----------------------------

Using symmetrizer  trace
Fitting 4 folds for each of 3 candidates, totalling 12 fits
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.737958  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.013578  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.012651  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.011192  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.009135  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.006535  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.003770  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.001574  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.000567  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.000380  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.000298  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.000238  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.000191  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.000144  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.000111  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.000086  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.000066  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.000051  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.000039  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.000101  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.000088  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.688702  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.005803  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.004593  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.004208  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.004022  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.003779  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.003422  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.002935  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.002444  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.002092  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.001836  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.001651  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.001514  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.001407  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.001320  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.001247  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.001183  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.001126  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.001074  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.001024  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.000981  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.172542  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.003400  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.002257  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.001851  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.001511  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.001225  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.000936  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.000696  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.000498  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.000386  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.000261  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.000219  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.000208  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.000247  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.000201  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.000199  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.001895  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.000876  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.000205  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.000184  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.000181  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.276985  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.001160  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.000992  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.000924  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.000869  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.000834  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.000810  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.000787  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.000763  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.000737  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.000716  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.000682  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.000656  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.000630  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.000606  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.000584  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.000562  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.000541  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.000522  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.000504  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.000487  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.624111  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.006432  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.005881  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.005858  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.005857  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.005869  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.005868  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.005864  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.005861  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 1.096901  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.013148  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.005045  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.007128  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.007204  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 00014: reducing learning rate of group 0 to 1.0000e-04.
Epoch 13000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 15000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 16000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 17000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 18000 ||  Training loss : 0.007109  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 19000 ||  Training loss : 0.007109  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 20000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.0001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.441285  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.006409  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.006473  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 00013: reducing learning rate of group 0 to 1.0000e-04.
Epoch 12000 ||  Training loss : 0.006475  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 14000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 15000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 16000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 17000 ||  Training loss : 0.006471  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 18000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 19000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 20000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.706089  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.009280  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.006735  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.006113  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.005982  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.005990  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 00017: reducing learning rate of group 0 to 1.0000e-04.
Epoch 16000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 18000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 19000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 20000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.0001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.692270  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.003213  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.001989  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.001688  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.001691  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.001728  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.001731  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.001728  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 00015: reducing learning rate of group 0 to 1.0000e-04.
Epoch 14000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 16000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 17000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 18000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 19000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 20000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.0001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.419515  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.006821  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.003581  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.002444  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.002266  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.002343  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.002457  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.002566  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.002680  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.002774  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.002822  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.002834  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.002836  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.002836  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.001
Epoch 00016: reducing learning rate of group 0 to 1.0000e-04.
Epoch 15000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 17000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 18000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 19000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 20000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.0001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 1.116178  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.017524  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.010454  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.009555  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.008318  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.006758  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.005142  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.003908  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.003240  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.002890  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.002633  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.002399  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.002211  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.002099  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.002061  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.002053  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.002051  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.002051  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.002051  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.002051  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.002051  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.585081  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.008857  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.005610  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.004271  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.003184  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.002515  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.002223  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.002111  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.002070  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.002063  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Overwritten attributes  get_veff  of <class 'pyscf.dft.rks.RKS'>
Epoch 15000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.002109  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.003392  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.401415  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.004100  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.003189  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.003003  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.002946  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.002949  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.002954  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.002955  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.002957  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.002957  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.001
Epoch 00016: reducing learning rate of group 0 to 1.0000e-04.
Epoch 15000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 17000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 18000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 19000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 20000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.0001


====== Iteration 1 ======
Using symmetrizer  trace
Success!

Running SCF calculations ...
-----------------------------

NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3529189373852
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3482785744807
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3557948619506
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3566029382129
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3773105081291
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3717756096372
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3728823731244
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3468435329547
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.355468924666
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3696142171586

Projecting onto basis...
-----------------------------

workdir/0/pyscf.chkpt
workdir/1/pyscf.chkpt
workdir/2/pyscf.chkpt
workdir/3/pyscf.chkpt
workdir/4/pyscf.chkpt
workdir/5/pyscf.chkpt
workdir/6/pyscf.chkpt
workdir/7/pyscf.chkpt
workdir/8/pyscf.chkpt
workdir/9/pyscf.chkpt
10 systems found, adding 97a66c91908d8f76f249705362d9e536
Traceback (most recent call last):
  File "/home/egezer/.local/bin/neuralxc", line 7, in <module>
    exec(compile(f.read(), __file__, 'exec'))
  File "/home/egezer/neuralxc/bin/neuralxc", line 240, in <module>
    func(**args_dict)
  File "/home/egezer/neuralxc/neuralxc/drivers/model.py", line 266, in sc_driver
    pre_driver(
  File "/home/egezer/neuralxc/neuralxc/drivers/other.py", line 210, in pre_driver
    add_data_driver(hdf5=file, system=system, method=method, density=filename, add=[], traj=xyz, override=True)
  File "/home/egezer/neuralxc/neuralxc/drivers/data.py", line 81, in add_data_driver
    obs(observable, zero)
  File "/home/egezer/neuralxc/neuralxc/drivers/data.py", line 74, in obs
    add_density((density.split('/')[-1]).split('.')[0], file, data, system, method, override)
  File "/home/egezer/neuralxc/neuralxc/datastructures/hdf5.py", line 19, in add_density
    return add_data(key, *args, **kwargs)
  File "/home/egezer/neuralxc/neuralxc/datastructures/hdf5.py", line 97, in add_data
    create_dataset()
  File "/home/egezer/neuralxc/neuralxc/datastructures/hdf5.py", line 94, in create_dataset
    cg.create_dataset(which, data=data)
  File "/home/egezer/.local/lib/python3.10/site-packages/h5py/_hl/group.py", line 139, in create_dataset
    self[name] = dset
  File "/home/egezer/.local/lib/python3.10/site-packages/h5py/_hl/group.py", line 371, in __setitem__
    h5o.link(obj.id, self.id, name, lcpl=lcpl, lapl=self._lapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5o.pyx", line 202, in h5py.h5o.link
OSError: Unable to create link (name already exists)
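For context, the failure at the bottom of the traceback is plain h5py behavior: creating a dataset under a name that already exists in the file raises this "name already exists" error, which is what happens here on iteration 1 when `add_data` re-creates a dataset written during iteration 0. The sketch below reproduces the error and shows one common workaround (deleting the stale dataset first); the file name and group layout are illustrative, not NeuralXC's actual schema, and this is not necessarily how the project itself should fix it.

```python
# Minimal reproduction of the h5py failure mode, plus a workaround.
# Group/dataset names below are hypothetical, not NeuralXC's schema.
import h5py
import numpy as np

with h5py.File("demo.h5", "w") as f:
    grp = f.create_group("system/method")
    grp.create_dataset("density", data=np.zeros(3))   # iteration 0 writes this
    try:
        # A second create_dataset with the same name fails,
        # as in the traceback ("name already exists").
        grp.create_dataset("density", data=np.ones(3))
    except (OSError, ValueError) as err:
        print("caught:", err)
    # Workaround: delete the existing dataset before re-creating it.
    if "density" in grp:
        del grp["density"]
    grp.create_dataset("density", data=np.ones(3))
    print(grp["density"][:].tolist())
```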