This repository has been archived by the owner on Jan 3, 2023. It is now read-only.
Could you add code that extracts VGG features from images for an image captioning task? This would be a nice starting point if someone wants to try tuning the image captioning setup or just obtain image captions.
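A minimal sketch of what such feature extraction could look like, assuming PyTorch/torchvision is available (the repository's own framework and API may differ; the preprocessing constants, layer choice, and file path are illustrative):

```python
# Sketch: extract VGG-16 features for image captioning, assuming torchvision >= 0.13.
# Treat this as illustrative only, not as this repository's API.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Standard ImageNet preprocessing used for VGG networks.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Load a pretrained VGG-16 and drop the final classification layer so the
# network outputs the 4096-dimensional fc7 features commonly used for captioning.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])
vgg.eval()

def extract_features(image_path):
    """Return a (1, 4096) feature tensor for a single image."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        return vgg(batch)

# Example usage (hypothetical path):
# feats = extract_features("example.jpg")
# print(feats.shape)  # torch.Size([1, 4096])
```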
This gist link shows how to transfer the trained AlexNet model over to a network with only 10 outputs in the last fully connected layer. It may give enough information on how to get the feature extraction layers out of a VGG model and attach them to a different output network. Let me know if that helps.
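In the same spirit as that gist, swapping the last fully connected layer of a pretrained VGG for a new output head could look roughly like this (again assuming torchvision; the 10-class head mirrors the gist's AlexNet example and is only a placeholder):

```python
# Sketch: reuse the pretrained VGG-16 feature layers and attach a new output
# network, analogous to the AlexNet gist above. Assumes torchvision >= 0.13;
# the 10-output head is a placeholder for whatever the captioning setup needs.
import torch.nn as nn
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Optionally freeze the convolutional feature extractor so only the new head trains.
for param in vgg.features.parameters():
    param.requires_grad = False

# Replace the last fully connected layer (4096 -> 1000) with a new one (4096 -> 10).
in_features = vgg.classifier[-1].in_features
vgg.classifier[-1] = nn.Linear(in_features, 10)
```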