
Error converting symbolic tensor object to numpy while running RELERNN_TRAIN #60

Open
brooklynnscott00 opened this issue Dec 19, 2024 · 0 comments

brooklynnscott00 commented Dec 19, 2024

While running ReLERNN_TRAIN, I hit the following error, which appears to be caused by a failure to convert a symbolic tensor to a NumPy array. This was on a fresh conda environment built with the dependency versions specified in the documentation (tensorflow/2.2.0, cudatoolkit/10.1.243, and cudnn/7.6.5). The full traceback is below, followed by a minimal stand-alone sketch of the step that fails. Any help would be greatly appreciated!

```
2024-12-19 09:15:57.795018: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2024-12-19 09:15:57.871059: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2944210000 Hz
2024-12-19 09:15:57.871375: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x555559e7dcf0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2024-12-19 09:15:57.871560: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2024-12-19 09:15:57.884792: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2024-12-19 09:16:28.739187: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Traceback (most recent call last):
  File "/home/brscott4/.conda/envs/relernn/bin/ReLERNN_TRAIN", line 130, in <module>
    main()
  File "/home/brscott4/.conda/envs/relernn/bin/ReLERNN_TRAIN", line 109, in main
    runModels(ModelFuncPointer=GRU_TUNED84,
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/ReLERNN/helpers.py", line 344, in runModels
    model = ModelFuncPointer(x,y)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/ReLERNN/networks.py", line 19, in GRU_TUNED84
    model = layers.Bidirectional(layers.GRU(84,return_sequences=False))(genotype_inputs)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/keras/layers/wrappers.py", line 531, in __call__
    return super(Bidirectional, self).__call__(inputs, **kwargs)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 922, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/keras/layers/wrappers.py", line 644, in call
    y = self.forward_layer(forward_inputs,
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 654, in __call__
    return super(RNN, self).__call__(inputs, **kwargs)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 922, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent_v2.py", line 408, in call
    inputs, initial_state, _ = self._process_inputs(inputs, initial_state, None)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 848, in _process_inputs
    initial_state = self.get_initial_state(inputs)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 636, in get_initial_state
    init_state = get_initial_state_fn(
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 1910, in get_initial_state
    return _generate_zero_filled_state_for_cell(self, inputs, batch_size, dtype)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 2926, in _generate_zero_filled_state_for_cell
    return _generate_zero_filled_state(batch_size, cell.state_size, dtype)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 2944, in _generate_zero_filled_state
    return create_zeros(state_size)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 2939, in create_zeros
    return array_ops.zeros(init_state_size, dtype=dtype)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 2677, in wrapped
    tensor = fun(*args, **kwargs)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 2721, in zeros
    output = _constant_if_small(zero, shape, dtype, name)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 2662, in _constant_if_small
    if np.prod(shape) < 1000:
  File "<__array_function__ internals>", line 180, in prod
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 3045, in prod
    return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
  File "/home/brscott4/.conda/envs/relernn/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 748, in __array__
    raise NotImplementedError("Cannot convert a symbolic Tensor ({}) to a numpy"
NotImplementedError: Cannot convert a symbolic Tensor (bidirectional/forward_gru/strided_slice:0) to a numpy array.
```
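
For reference, the model-construction step that fails can be reproduced in isolation with something like the sketch below. This is only a rough stand-in: the input shape is a placeholder rather than my actual data dimensions, and the GRU width (84) just mirrors what ReLERNN/networks.py uses in GRU_TUNED84.

```python
# Minimal sketch of the failing step, run inside the same conda environment
# (tensorflow 2.2.0). The input shape here is illustrative only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

print("tensorflow", tf.__version__, "numpy", np.__version__)

# Stand-in for the genotype input tensor built in ReLERNN/networks.py
genotype_inputs = layers.Input(shape=(500, 40))  # (sites, haplotypes), made-up sizes

# Building the bidirectional GRU is what raises the NotImplementedError for me:
# Keras creates a zero-filled initial state, and np.prod() ends up being called
# on a shape that contains a symbolic tensor (see the last frames of the traceback).
outputs = layers.Bidirectional(layers.GRU(84, return_sequences=False))(genotype_inputs)
print("layer built without error; output shape:", outputs.shape)
```

Since the failure happens inside np.prod() on a symbolic shape, I suspect the installed NumPy version may also be relevant alongside the pinned TensorFlow/CUDA versions, but I am not sure.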