
Can I limit the spike amplitude to around 100 uV while keeping far-neuron noise at 40-300 uV? #154

Open
ZoeChen96 opened this issue Jun 26, 2023 · 2 comments

Comments

@ZoeChen96

Hi there,
I have a question about dataset generation. I have been generating spikes from templates with amplitudes of 40-300 uV, with far-neuron noise based on the same templates. However, to validate the results better, I would like to generate spikes with amplitudes of only around 100 uV. In that case I can't use the 'far-neurons' noise model, because (I assume) there aren't many templates restricted to around 100 uV.
Is there any possibility to set the spike amplitude and the noise-neuron amplitude separately? I will also attach my script for generating a single-channel recording with ~100 uV spikes (3 spike classes).
Thanks a lot!
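For context, the amplitude restriction that MEArec applies via `min_amp`/`max_amp` amounts to selecting templates by peak-to-peak amplitude. A minimal numpy sketch of that idea (not MEArec's actual implementation; `filter_templates_by_amplitude` is a hypothetical helper, and templates are assumed to be a `(n_templates, n_samples)` array in uV):

```python
import numpy as np

def filter_templates_by_amplitude(templates, min_amp, max_amp):
    """Keep only templates whose peak-to-peak amplitude
    falls inside [min_amp, max_amp] (same units as templates)."""
    amps = templates.max(axis=-1) - templates.min(axis=-1)
    mask = (amps >= min_amp) & (amps <= max_amp)
    return templates[mask], mask

# toy example: 5 single-channel templates with different amplitudes
t = np.linspace(0, 2 * np.pi, 64)
scales = np.array([50, 95, 100, 105, 250], dtype=float)
# peak-to-peak amplitude of each row is approximately its scale
templates = -np.sin(t)[None, :] * scales[:, None] / 2

kept, mask = filter_templates_by_amplitude(templates, 90, 110)
print(mask)  # only the ~100 uV templates survive
```

This also illustrates the asker's point: the tighter the `[min_amp, max_amp]` window, the fewer templates survive, which is exactly why a narrow ~100 uV window leaves too few templates to double as far-neuron noise sources.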

@alejoe91
Member

Hi @ZoeChen96
Currently it's not possible, but I think it would be a good idea to allow selecting different template files for the "recorded" neurons and the "noise" neurons.
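Until such an option exists, one possible workaround (a sketch only, not MEArec API) is to build the far-neuron background yourself from a second, lower-amplitude template set: convolve Poisson spike trains with those templates and add the resulting trace to a recording generated with the ~100 uV templates (using, e.g., `noise_mode='uncorrelated'` for the thermal-noise floor). All waveforms and rates below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 30000            # sampling rate, Hz
dur = 1.0             # duration, s
n_samples = int(fs * dur)

# hypothetical "noise" templates: small (20-40 uV) waveforms,
# distinct from the ~100 uV "recorded" templates
t = np.linspace(0, 2 * np.pi, 60)
noise_templates = [-np.sin(t) * a / 2 for a in (20, 30, 40)]

background = np.zeros(n_samples)
for tmpl in noise_templates:
    rate = 5.0  # Hz, assumed firing rate of each far neuron
    n_spikes = rng.poisson(rate * dur)
    spike_train = np.zeros(n_samples)
    spike_train[rng.integers(0, n_samples, n_spikes)] = 1.0
    # superimpose this neuron's waveform at its spike times
    background += np.convolve(spike_train, tmpl, mode="same")

# 'background' (same length as the recording) can then be added
# sample-wise to the traces generated from the ~100 uV templates
print(background.shape)
```

This keeps the recorded-spike amplitude window and the noise-neuron amplitude range fully independent, at the cost of bypassing MEArec's built-in far-neurons model (no drift/modulation on the background units).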

@ZoeChen96
Author

import MEArec as mr
import numpy as np
import scipy.io
import spikeinterface.extractors as se
import warnings
from pprint import pprint
import sys
warnings.simplefilter("ignore")
#################### parameters - Bernardo's detection dataset: global settings ####################
probe       = 'monotrode'
fs          = 30000
n_exc       = 0
n_inh       = 3
duration    = 3600 #seconds (default; overridden per run by the duration list below)
filtering   = False #filtering True/false
ref_per     = 7
noise_type  = 'uncorrelated' #uncorrelated/distance-correlated/far-neurons
####################################################################################
original_stdout = sys.stdout # Save a reference to the original standard output
dir_full_templates = '/ia/templates300_32kHz_minampnull_'+ probe +'.h5'


tempgen = mr.load_templates(dir_full_templates)
recording_params = mr.get_default_recordings_params()
# Set parameters
recording_params['spiketrains']['n_exc'] = n_exc
recording_params['spiketrains']['n_inh'] = n_inh

recording_params['spiketrains']['st_inh'] = 0
recording_params['spiketrains']['min_rate'] = 0.1

recording_params['spiketrains']['seed'] = 0
recording_params['spiketrains']['ref_per'] = ref_per

recording_params['templates']['min_amp'] = 90
recording_params['templates']['max_amp'] = 110
recording_params['templates']['seed'] = 0

recording_params['recordings']['modulation'] = 'none'
recording_params['recordings']['sdrand'] = 0
recording_params['recordings']['noise_mode'] = noise_type
recording_params['recordings']['noise_color'] = True
recording_params['recordings']['sync_rate'] = 0

# use chunk options
recording_params['recordings']['chunk_conv_duration'] = 20
recording_params['recordings']['chunk_noise_duration'] = 20
recording_params['recordings']['chunk_filter_duration'] = 20
recording_params['recordings']['seed'] = 0

#filter settings
recording_params['recordings']['fs'] = fs
recording_params['recordings']['filter'] = filtering
recording_params['recordings']['filter_cutoff'] = [300, 6000] # bandpass cutoff frequencies (Hz)

#seeds settings
recording_params['seeds']['spiketrains'] = 0
recording_params['seeds']['templates'] = 0
recording_params['seeds']['convolution'] = 0
recording_params['seeds']['noise'] = 0


#################################### set looping parameters and generate datasets ######################################
noise          = [10,15,20]
spiking_rate = [120,150]
duration = [600,600]
#spiking_rate   = [1,10,20,30,40]
#duration = [3600,1800,900,600,600]
for i in noise:
    counter = 0
    for j in spiking_rate:
        print('generating recording for noise', str(i),' spiking rate',str(j),' neuron number',str(n_inh), 'duration' ,str(duration[counter]) )
        recording_params['recordings']['noise_level'] = i
        recording_params['spiketrains']['f_inh'] = j/n_inh  # per-neuron rate, so the total rate is ~j Hz
        recording_params['spiketrains']['duration'] = duration[counter]
        set_name = 'n'+str(i) + '_fr'+str(j)+'_d'+str(duration[counter])
        
        dir_full_recordings = '/imec/other/macaw/projectdata/mearec_datasets/detection_datasets/set'+ set_name + '/set' + set_name + '_recording.h5'
        dir_full_gttimes = '/a/set'+ set_name + '/set' + set_name + '_gttimes.mat'
        filename = '/a/set' + set_name + '/set' + set_name + '_info.txt'
        recgen = mr.gen_recordings(templates=dir_full_templates, params=recording_params, verbose=False)
        # save recording
        mr.save_recording_generator(recgen, dir_full_recordings)
        #save gt_times
        if (n_exc+n_inh >0):
            sorting_GT = se.MEArecSortingExtractor(dir_full_recordings)
            scipy.io.savemat(dir_full_gttimes, {'gt_times': sorting_GT.to_spike_vector()})
        # write the recording info to a text file
        with open(filename, "w") as f:
            pprint(recgen.info, stream=f)
        counter = counter+1
