Event-driven Spiking Attention

Create an event-driven spiking attention map to identify interesting regions in the scene

Prerequisites

You should have learned about spiking neurons and how they are modelled in software, and you should have a good knowledge of how to use the event-driven library to read and process events.

Important - Assignment 1 is required for the smoke test to run. Please make sure that you have built and installed Assignment 1 so that the yarpmanager can find its binary.

Tutorial 1

Tutorial 2

Assignment 1

Assignment

You must complete the missing code in the provided module to identify interesting regions in the event stream. Incoming events will inject energy into an array of spiking neurons. The overlap between neighbouring neurons results in local regions where multiple activations reach the neuron's threshold, and the resulting output spike indicates the location of high activity. Importantly, the activity must also occur within a contiguous period of time, before the built-up energy dissipates due to the leaky dynamics.


To accomplish this task you have to modify the provided module, filling in the missing gaps highlighted by the comment // FILL IN THE CODE, to:

  1. Update a single neuron according to the leaky integrate-and-fire dynamics (a sketch of this step, together with steps 2 and 4, is given after this list).
  2. Update a square region of neurons given the centre position. The mean position of any neurons with an energy higher than the threshold must be calculated.
  3. Create an event<LabelledAE> from the centre position of fired neurons.
  4. Reset a region of neurons after a neuron has fired.
  5. Read the event-stream and perform the above operations, writing event<LabelledAE> events on an output port.
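
Below is a minimal sketch of what steps 1, 2 and 4 could look like, assuming the neuron map is stored as a 2D array of simple structs. The names Neuron, updateNeuron, updateRegion and resetRegion are illustrative, not the provided module's actual API, and timestamp wrap-around (see the hints below) is omitted for clarity:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative neuron state: sub-threshold energy and the time of the last update.
struct Neuron {
    double v{0.0};   // sub-threshold energy
    int t_p{0};      // timestamp of the previous update (event clock ticks)
};

// Step 1: leaky integrate-and-fire update of a single neuron. The stored
// energy decays over the elapsed time, then one unit of energy is injected
// for the incoming event.
void updateNeuron(Neuron &n, int t, double tau)
{
    n.v = 1.0 + n.v * std::exp(-(double)(t - n.t_p) / tau);
    n.t_p = t;
}

// Step 2: update a square region centred on (cx, cy) and return, through
// (fx, fy), the mean position of any neurons whose energy exceeds the
// threshold. Returns true if at least one neuron fired.
bool updateRegion(std::vector<std::vector<Neuron>> &map, int cx, int cy,
                  int radius, int t, double tau, double threshold,
                  double &fx, double &fy)
{
    double sx = 0.0, sy = 0.0;
    int count = 0;
    for (int y = std::max(0, cy - radius); y <= std::min((int)map.size() - 1, cy + radius); y++) {
        for (int x = std::max(0, cx - radius); x <= std::min((int)map[y].size() - 1, cx + radius); x++) {
            updateNeuron(map[y][x], t, tau);
            if (map[y][x].v > threshold) { sx += x; sy += y; count++; }
        }
    }
    if (count == 0) return false;
    fx = sx / count;
    fy = sy / count;
    return true;
}

// Step 4: reset a region of neurons after a spike so the same activity
// does not trigger repeated attention events.
void resetRegion(std::vector<std::vector<Neuron>> &map, int cx, int cy, int radius)
{
    for (int y = std::max(0, cy - radius); y <= std::min((int)map.size() - 1, cy + radius); y++)
        for (int x = std::max(0, cx - radius); x <= std::min((int)map[y].size() - 1, cx + radius); x++)
            map[y][x].v = 0.0;
}
```

The mean fired position returned by step 2 is what you would then pack into the event<LabelledAE> of step 3 and write to the output port in step 5.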

The saccadic suppression module will be used to remove events from the event-stream that are caused by iCub motion. Bursts of data due to moving the event-cameras themselves will therefore not produce attention points; attention will only be drawn to independently moving objects.

You can test your code on any of the provided event-driven datasets (https://github.com/vvv-school/tutorial_event-driven-framework) using the event-spiking-attention yarpmanager application. The final attention events will be visualised in a yarpview window and should correspond to the position of the shaken object.

When you are happy with your attention module, you can test your code automatically by running the script test.sh in the smoke-test directory. The smoke-test will give a maximum of 7 marks.


Hints and Tips

  • Only the event-stream from the right camera will be used. The vPreProcess module is used to split the event-stream into separate left and right streams.
  • The provided yarpmanager application visualises the subthreshold values of the neurons. If processing is slow on your laptop, closing this window (or disconnecting the port) reduces the processing load. The subthreshold visualisation might be useful for debugging purposes.
  • A simple equation for a leaky integrate-and-fire neuron is v = 1.0 + v * e^(-(t - t_p)/tau), where v is the subthreshold energy, t is the current time, t_p is the previous time the neuron was updated, and tau is the decay time constant.
  • Event timestamps (ev::vEvent::stamp) have a maximum value of 2^24. Each increment of the timestamp represents 80 ns, so the timestamp field will overflow and wrap around every ~1.3 seconds. To calculate the time difference between two events across a wrap you can first do if(v2->stamp < v1->stamp) v1->stamp -= ev::vtsHelper::max_stamp; then int dt = v2->stamp - v1->stamp will be positive (a wrap-safe sketch is given after this list).
  • If you want to modify the default parameters, please do so in the configuration of the RFModule.
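
The last two hints combine into a wrap-safe version of the neuron decay. A minimal sketch, assuming plain integer timestamps; the names wrappedDt and updateEnergy are illustrative, and MAX_STAMP stands in for ev::vtsHelper::max_stamp:

```cpp
#include <cmath>

// Illustrative constant standing in for ev::vtsHelper::max_stamp (2^24).
static const int MAX_STAMP = 1 << 24;

// Wrap-safe difference between a previous and a current event timestamp.
int wrappedDt(int t_prev, int t_now)
{
    if (t_now < t_prev)        // the counter wrapped between the two events
        t_prev -= MAX_STAMP;
    return t_now - t_prev;     // always non-negative
}

// Leaky decay followed by injecting one unit of energy (the hint's equation).
double updateEnergy(double v, int t_prev, int t_now, double tau)
{
    return 1.0 + v * std::exp(-(double)wrappedDt(t_prev, t_now) / tau);
}
```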

Extension to Live Robot Tests

If you are keen to test the event-cameras with your developed module, we can run the algorithms live on the purple robot and look at the output. The ports for accessing data on the robot are identical to those of the dataset you have been using, so your module should interface with the robot without modification.

As a (recommended) optional challenge, please extend your code with a gaze controller, which you are now experts at after the kinematics lesson. The module already calculates the attention point in (u, v) coordinates, which can be used to control the gaze of the robot. Once a gaze shift occurs, the saccadic suppression should block the event-stream, resulting in attention only to independently moving objects.
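
A minimal sketch of how the attention point could drive the gaze, assuming the standard iKinGazeCtrl gazecontrollerclient device is available on the robot; the port names, the 1.0 m distance guess, and the choice of camSel = 1 for the right camera are assumptions to adapt to your setup:

```cpp
#include <yarp/os/Network.h>
#include <yarp/os/Property.h>
#include <yarp/dev/PolyDriver.h>
#include <yarp/dev/GazeControl.h>
#include <yarp/sig/Vector.h>

int main()
{
    yarp::os::Network yarp;

    // Connect to the gaze controller server (assumed to run as /iKinGazeCtrl).
    yarp::os::Property options;
    options.put("device", "gazecontrollerclient");
    options.put("remote", "/iKinGazeCtrl");
    options.put("local", "/attention/gaze");

    yarp::dev::PolyDriver driver(options);
    yarp::dev::IGazeControl *gaze = nullptr;
    if (!driver.isValid() || !driver.view(gaze))
        return 1;

    // Example attention point in image coordinates (u, v) from the right camera.
    yarp::sig::Vector px(2);
    px[0] = 152;  // u
    px[1] = 120;  // v

    // Fixate the pixel, assuming a depth of 1.0 m along the camera axis.
    gaze->lookAtMonoPixel(1, px, 1.0);  // camSel = 1 assumed to select the right camera

    return 0;
}
```

In your module the hard-coded pixel would be replaced by each new attention event, and the saccadic suppression should then gate the burst of events caused by the gaze shift itself.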

The material for the gaze controller lesson is available here if you need a refresher.
