DAU-ConvNet not working with tf.keras or keras #5

Open

cloudscapes opened this issue Jan 31, 2020 · 2 comments

cloudscapes commented Jan 31, 2020

Hi there,

I am trying to use DAUs to replace the standard convolutional blocks in a modified U-Net architecture, but I get this warning and the error below:

```
WARNING: Entity <bound method DAUConv2dTF.call of <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a42a0f160>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: converting <bound method DAUConv2dTF.call of <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a42a0f160>>: AssertionError: If not matching a CFG node, must be a block statement: <gast.gast.ImportFrom object at 0x1a42b4f748>
```

My code:

```python
import tensorflow.compat.v1 as tf
tf.logging.set_verbosity(tf.logging.ERROR)
import numpy as np
from tensorflow.compat.v1.keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose, concatenate, BatchNormalization, Activation, add
from tensorflow.compat.v1.keras.models import Model, model_from_json
from tensorflow.keras.activations import relu
from dau_conv_tf import DAUConv2dTF


def dau_bn(x, filters, dau_units, max_kernel_size):
    x = DAUConv2dTF(filters=filters,
                    dau_units=dau_units,
                    max_kernel_size=max_kernel_size,
                    strides=1,
                    data_format='channels_first',
                    use_bias=False,
                    weight_initializer=tf.random_normal_initializer(stddev=0.1),
                    mu1_initializer=tf.random_uniform_initializer(
                        minval=-tf.floor(max_kernel_size / 2.0),
                        maxval=tf.floor(max_kernel_size / 2.0), dtype=tf.float32),
                    mu2_initializer=tf.random_uniform_initializer(
                        minval=-tf.floor(max_kernel_size / 2.0),
                        maxval=tf.floor(max_kernel_size / 2.0), dtype=tf.float32),
                    sigma_initializer=None,
                    bias_initializer=None,
                    weight_regularizer=None,
                    mu1_regularizer=None,
                    mu2_regularizer=None,
                    sigma_regularizer=None,
                    bias_regularizer=None,
                    activity_regularizer=None,
                    weight_constraint=None,
                    mu1_constraint=None,
                    mu2_constraint=None,
                    sigma_constraint=None,
                    bias_constraint=None,
                    trainable=True,
                    mu_learning_rate_factor=500,  # additional factor for gradients of mu1 and mu2
                    name=None)(x)
    x = BatchNormalization(axis=1, scale=False)(x)
    x = Activation('relu')(x)
    return x


def conv2d_bn(x, filters, num_row, num_col, padding='same', strides=(1, 1), activation='relu'):
    x = Conv2D(filters, (num_row, num_col), strides=strides, padding=padding,
               data_format='channels_first', use_bias=False)(x)
    x = BatchNormalization(axis=1, scale=False)(x)
    if activation is None:
        return x
    x = Activation(activation)(x)
    return x


def trans_conv2d(x, filters, num_row, num_col, padding='same', strides=(2, 2), name=None):
    x = Conv2DTranspose(filters, (num_row, num_col), strides=strides, padding=padding,
                        data_format='channels_first')(x)
    x = BatchNormalization(axis=1, scale=False)(x)
    return x


def MultiResBlock(dau_units, max_kernel_size, U, inp, alpha=1.67):
    W = alpha * U

    shortcut = inp
    shortcut = conv2d_bn(shortcut, int(W * 0.167) + int(W * 0.333) + int(W * 0.5),
                         1, 1, activation=None, padding='same')

    conv3x3 = dau_bn(inp, int(W * 0.167), dau_units, max_kernel_size)
    conv5x5 = dau_bn(conv3x3, int(W * 0.333), dau_units, max_kernel_size)
    conv7x7 = dau_bn(conv5x5, int(W * 0.5), dau_units, max_kernel_size)

    out = concatenate([conv3x3, conv5x5, conv7x7], axis=1)
    out = BatchNormalization(axis=1)(out)
    out = add([shortcut, out])
    out = Activation('relu')(out)
    out = BatchNormalization(axis=1)(out)
    return out


def ResPath(dau_units, max_kernel_size, filters, length, inp):
    shortcut = inp
    shortcut = conv2d_bn(shortcut, filters, 1, 1, activation=None, padding='same')

    out = conv2d_bn(inp, filters, 3, 3, activation='relu', padding='same')
    out = add([shortcut, out])
    out = Activation('relu')(out)
    out = BatchNormalization(axis=1)(out)

    for i in range(length - 1):
        shortcut = out
        shortcut = conv2d_bn(shortcut, filters, 1, 1, activation=None, padding='same')

        out = dau_bn(out, filters, dau_units, max_kernel_size)
        out = add([shortcut, out])
        out = Activation('relu')(out)
        out = BatchNormalization(axis=1)(out)

    return out


def MultiResUnet(n_channels, height, width):
    inputs = Input((n_channels, height, width))

    mresblock1 = MultiResBlock((2, 2), 9, 32, inputs)
    pool1 = MaxPooling2D(pool_size=(2, 2), data_format='channels_first')(mresblock1)
    mresblock1 = ResPath((2, 2), 9, 32, 4, mresblock1)

    mresblock2 = MultiResBlock((2, 2), 9, 32 * 2, pool1)
    pool2 = MaxPooling2D(pool_size=(2, 2), data_format='channels_first')(mresblock2)
    mresblock2 = ResPath((2, 2), 9, 32 * 2, 3, mresblock2)

    mresblock3 = MultiResBlock((2, 2), 9, 32 * 4, pool2)
    pool3 = MaxPooling2D(pool_size=(2, 2), data_format='channels_first')(mresblock3)
    mresblock3 = ResPath((2, 2), 9, 32 * 4, 2, mresblock3)

    mresblock4 = MultiResBlock((2, 2), 9, 32 * 8, pool3)
    pool4 = MaxPooling2D(pool_size=(2, 2), data_format='channels_first')(mresblock4)
    mresblock4 = ResPath((2, 2), 9, 32 * 8, 1, mresblock4)

    mresblock5 = MultiResBlock((2, 2), 9, 32 * 16, pool4)

    up6 = concatenate([trans_conv2d(mresblock5, 32 * 8, 2, 2, strides=(2, 2), padding='same'), mresblock4], axis=1)
    mresblock6 = MultiResBlock((2, 2), 9, 32 * 8, up6)

    up7 = concatenate([trans_conv2d(mresblock6, 32 * 4, 2, 2, strides=(2, 2), padding='same'), mresblock3], axis=1)
    mresblock7 = MultiResBlock((2, 2), 9, 32 * 4, up7)

    up8 = concatenate([trans_conv2d(mresblock7, 32 * 2, 2, 2, strides=(2, 2), padding='same'), mresblock2], axis=1)
    mresblock8 = MultiResBlock((2, 2), 9, 32 * 2, up8)

    up9 = concatenate([trans_conv2d(mresblock8, 32, 2, 2, strides=(2, 2), padding='same'), mresblock1], axis=1)
    mresblock9 = MultiResBlock((2, 2), 9, 32, up9)

    conv10 = conv2d_bn(mresblock9, 1, 1, 1, activation='sigmoid')

    model = Model(inputs=[inputs], outputs=[conv10])
    return model


model = MultiResUnet(3, 128, 128)
display(model.summary())
```

The error I get:

```
TypeError                                 Traceback (most recent call last)

----> 1 model = MultiResUnet(3,128, 128)
      2 display(model.summary())

in MultiResUnet(n_channels, height, width)
     54     conv10 = conv2d_bn(mresblock9, 1, 1, 1, activation='sigmoid')
     55
---> 56     model = Model(inputs=[inputs], outputs=[conv10])
     57
     58     return model

~/miniconda3/envs/MastersThenv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in __init__(self, *args, **kwargs)
    127
    128   def __init__(self, *args, **kwargs):
--> 129     super(Model, self).__init__(*args, **kwargs)
    130     # initializing _distribution_strategy here since it is possible to call
    131     # predict on a model without compiling it.

~/miniconda3/envs/MastersThenv/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py in __init__(self, *args, **kwargs)
    165       self._init_subclassed_network(**kwargs)
    166
--> 167     tf_utils.assert_no_legacy_layers(self.layers)
    168
    169     # Several Network methods have "no_automatic_dependency_tracking"

~/miniconda3/envs/MastersThenv/lib/python3.6/site-packages/tensorflow/python/keras/utils/tf_utils.py in assert_no_legacy_layers(layers)
    397           'classes), please use the tf.keras.layers implementation instead. '
    398           '(Or, if writing custom layers, subclass from tf.keras.layers rather '
--> 399           'than tf.layers)'.format(layer_str))
    400
    401
```
```
TypeError: The following are legacy tf.layers.Layers:
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a42a0f160>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a42ac44e0>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a42d1f9e8>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a43aaf048>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a43b6d358>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a43ce27f0>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a448cb0b8>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a44a9ccc0>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a44b00860>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a453f3400>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a454a37f0>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4554fd30>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a45ce3e48>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a45ebf518>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a444bc0b8>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4665a588>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4ad5b518>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4adffac8>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4511c668>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4b3f5710>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4b4a5ac8>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a44324048>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a45e96b70>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a445fe3c8>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4323aa58>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4bb59898>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4bd356d8>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a435126d8>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4bd976a0>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a437f65c0>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4c2bc2e8>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4c375668>
  <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4c425c88>
To use keras as a framework (for instance using the Network, Model, or Sequential classes), please use the tf.keras.layers implementation instead. (Or, if writing custom layers, subclass from tf.keras.layers rather than tf.layers)
```

My `tf.keras.__version__` is 2.2.4-tf, running on macOS High Sierra.

Any help would be highly appreciated,

cheers, H

skokec (Owner) commented Feb 6, 2020

Hi,

Sorry for the late reply; I wasn't able to find time before.

Do you actually get an error that prevents it from going further? There appears to be a legacy issue: I created the layers from TensorFlow v1 examples based on `tf.contrib.layers.conv2d()`, and it seems this is not compatible with your version of Keras layers. You will have to modify the base inheritance class of DAUConv2dTF to make it work with your version of Keras.

Best,
Domen
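For anyone landing here, the shape of the base-class swap suggested above can be sketched with plain-Python stand-ins. `LegacyLayer` and `KerasLayer` below are hypothetical mocks (not the real `tf.layers.Layer` / `tf.keras.layers.Layer` classes), and `DAUConv2dKeras` is an invented name; this only illustrates the delegation pattern, not a drop-in fix:

```python
class LegacyLayer:
    """Stand-in for the tf.layers-based DAUConv2dTF (hypothetical mock)."""
    def __init__(self, filters):
        self.filters = filters

    def call(self, x):
        # Pretend "convolution": scale each input element by `filters`.
        return [v * self.filters for v in x]


class KerasLayer:
    """Stand-in for tf.keras.layers.Layer (hypothetical mock)."""
    def __call__(self, x):
        return self.call(x)


class DAUConv2dKeras(KerasLayer):
    """Inherit from the base class the framework accepts, and delegate
    the actual computation to the wrapped legacy layer."""
    def __init__(self, *args, **kwargs):
        self._legacy = LegacyLayer(*args, **kwargs)

    def call(self, x):
        return self._legacy.call(x)


layer = DAUConv2dKeras(filters=2)
print(layer([1, 2, 3]))  # [2, 4, 6] — behavior is unchanged, but isinstance checks now see the new base
```

In the real library this would additionally mean forwarding `build()` and weight tracking to the wrapped layer, which is the non-trivial part of the port.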

cloudscapes (Author) commented

Hi,

> You will have to modify the base inheritance class for DAUConv2dTF to make it work with your version of Keras.

That's true, but for the moment I just downgraded to TF 1.9 and Keras 2.2.0 and got it working.
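For reference, pinning to the versions mentioned above would look something like this (exact package names/versions are assumptions based on the comment, adjust to your environment):

```shell
# Pin the environment to the combination reported to work (TF 1.9 + Keras 2.2.0).
pip install "tensorflow==1.9.0" "keras==2.2.0"
```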

Cheers,
H
