Review maximum step length values and production cuts #174
Related: #162
We should also discuss ideas to tune the cut values in specific regions of the detectors, like close to the surface, without introducing new sensitive volumes.
How exactly did you cut away steps? I do not understand how the efficiency can go up by removing energy depositions.
Are you observing the same data and just changing something in the readout? If you "shrank" the detector and changed something about the physical volume, there might be an unforeseen issue with production cuts. But if you kept everything exactly the same and only filtered out some vertices in the readout, I do not understand (from the point of view of causality) how the efficiency can go up.
This is the full script, for reference (also an interesting example of quick post-processing of the single-table remage output). I have validated it heavily, so I think there is no bug.

```python
import re
import gc
import glob
from concurrent.futures import ThreadPoolExecutor

import matplotlib.pyplot as plt
import hist
import awkward as ak
import numpy as np
import pyg4ometry
from legendhpges import make_hpge
from lgdo import lh5
from legendmeta import LegendMetadata

plt.rcParams["figure.figsize"] = (12, 4)
plt.rcParams["figure.dpi"] = 96

lmeta = LegendMetadata("legend-metadata")
chmap = lmeta.channelmap(lmeta.dataprod.runinfo.p03.r000.phy.start_key).group("system").geds.map("daq.rawid")

l200reg = pyg4ometry.gdml.Reader("l200.gdml").getRegistry()


# get coordinates local to the polycone origin
def local_coords(coords, origin):
    # step and vertex locations are in units of meters
    return np.array([c * 1e3 - o for c, o in zip([coords.xloc, coords.yloc, coords.zloc], origin)]).T


def process(sim_dir, with_av=False, detectors=sorted(chmap.keys())):
    # registry needed to store the HPGe objects (see later)
    reg = pyg4ometry.geant4.Registry()

    def worker(rawid):
        name = chmap[rawid].name
        # read the vertex data of the simulation in this detector
        vertices = lh5.read_as("stp/vertices", f"{sim_dir}/{name}.lh5", "ak")
        # read the step data of the simulation in this detector
        steps = lh5.read_as("stp/germanium", f"{sim_dir}/{name}.lh5", "ak")

        # if we want to take the active volume into account, we need
        # to restrict the analysis to the vertices contained in it
        if with_av:
            # HPGe object needed to compute the distance to the surface
            hpge = make_hpge(chmap[rawid], registry=reg, allow_cylindrical_asymmetry=False)
            # the step positions are in global coordinates: we need to make
            # them local to the detector if we want to calculate distances
            # to the surface
            hpge_pos = l200reg.physicalVolumeDict[name].position.eval()
            # just a shortcut for later, needed to select n+ surfaces
            nplus = np.where(np.array(hpge.surfaces) == "nplus")
            # get the FCCD
            fccd = chmap[rawid].characterization.combined_0vbb_analysis.fccd_in_mm.value

            # now we need to select only vertices in the active volume:
            # convert to local vertex coordinates
            vtx_coords = local_coords(vertices, hpge_pos)
            # get the distance to the n+ surface
            vtx_distance_nplus = hpge.distance_to_surface(vtx_coords, nplus)
            # find Geant4 event IDs of vertices in the active volume (hard step)
            av_evtids = np.unique(vertices[vtx_distance_nplus >= fccd].evtid)
            n_g4ev = len(av_evtids)
            # filter out dead-layer vertices (events) from the step table
            steps = steps[np.isin(steps.evtid, av_evtids)]

            # now we need to calculate the active energy
            stp_coords = local_coords(steps, hpge_pos)
            stp_distance_nplus = hpge.distance_to_surface(stp_coords, nplus)
            # apply the activeness model, but only to steps in the source detector
            steps["edep_active"] = ak.where(
                (steps.det_uid == rawid) & (stp_distance_nplus < fccd),
                0, steps.edep  # energy set to zero in the dead layer
            )
            # some bookkeeping
            del vertices, vtx_coords, vtx_distance_nplus, stp_distance_nplus, stp_coords, hpge
        else:
            n_g4ev = len(np.unique(vertices.evtid))

        # now we need to do some event building, by grouping by event ID...
        d = ak.unflatten(steps, ak.run_lengths(steps.evtid))
        # ...and by detector
        d = d[ak.argsort(d.det_uid)]  # sorting needed first
        d = ak.unflatten(d, ak.ravel(ak.run_lengths(d.det_uid)), axis=-1)

        # apply an energy threshold of 25 keV to hits, like in data
        d = d[ak.sum(d.edep_active if with_av else d.edep, axis=-1) > 25]
        # and keep only multiplicity == 1 events
        d = d[ak.num(d, axis=-2) < 2]

        # get detector ID and energy for each event and unflatten everything for convenience
        rawids = ak.ravel(ak.firsts(d.det_uid, axis=-1))
        energy = ak.ravel(ak.sum(d.edep_active if with_av else d.edep, axis=-1))
        # we want to look only at the energy recorded in this detector
        energy = energy[rawids == rawid]

        # count events in the energy acceptance window around the 0vbb peak at the Q-value
        surv = ak.sum((energy > 2038) & (energy < 2040))

        # again some bookkeeping, then ping the garbage collector
        del steps
        gc.collect()

        # display progress
        print(">", end="")

        # return a struct with all the relevant output data
        return name, {
            "c_eff": surv / n_g4ev,
            "surv": surv,
            "n_g4ev": n_g4ev,
            "e_hist": hist.new.Reg(2100, 0, 2100, name="energy").Double().fill(energy),
        }

    # part of the progress bar
    print("<start", " " * (len(chmap) - 9), "end>")

    # run the processing in parallel with shared memory. this will not give a
    # big speedup, since the tasks are mostly CPU bound. should give
    # ProcessPoolExecutor a try, but it requires some changes in the worker()
    # code and I'm too lazy
    with ThreadPoolExecutor(max_workers=8) as executor:
        return dict(executor.map(worker, detectors))
```

This is a plot of the vertices/steps close to a surface, to demonstrate that I'm correctly removing steps/vertices in the dead layer:
I think Luigi only runs the simulation once and then changes the FCCD in post-processing.
In particular, "safety" is Geant4 jargon referring to the distance to the nearest volume boundary (I think it can also include other terms). So for events close to the surface (within the production cut length), Geant4 will always produce the bremsstrahlung photons, while further out it will stack the particles. This affects the results when simulating with an FCCD, since a particle may have a range long enough to cross the boundary of the active volume but not to reach the detector surface. I think we have to consider this issue carefully, since we are probably very sensitive to even small energy losses in the observables of interest (gamma-line counts), due to the excellent energy resolution.
@EricMEsch maybe we can think of a nice solution involving the calculation of the distance to the surface, to have e.g. smaller cuts within 2 mm of the surface? We should think of ideas that keep the simulation fast and the output light while remaining accurate close to the surface.
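As a starting point for the selection itself, here is a minimal sketch using the same legendhpges machinery as the script above; `det_meta` (a detector metadata mapping, e.g. `chmap[rawid]`) and `coords` (an (N, 3) array of local positions in mm) are hypothetical placeholders:

```python
import numpy as np
import pyg4ometry
from legendhpges import make_hpge

# minimal sketch of the proposed near-surface selection, reusing the
# distance_to_surface calculation from the script above. "det_meta" and
# "coords" are placeholders, see the lead-in text.
reg = pyg4ometry.geant4.Registry()
hpge = make_hpge(det_meta, registry=reg)

# indices of the n+ surfaces, as in the script above
nplus = np.where(np.array(hpge.surfaces) == "nplus")

# steps/vertices within 2 mm of the n+ surface: the region where the
# smaller production cuts (or step limits) would be applied
dist = hpge.distance_to_surface(coords, nplus)
near_surface = dist < 2  # mm
```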
But the cut values for the …
I am not sure; you still have an effect close to the border of the AV region.
Ahh, you sample vertices randomly inside the volume and only consider 0vbb events where the vertex was inside the active volume. That makes way more sense, my mistake. I somehow thought you were only removing energy depositions while keeping the total number of events the same (despite you clarifying that you removed vertices and steps, my bad).
I think that this might be an effect of the step limit rather than the production cuts. Because in …
It would be pretty interesting to check the average step size of your simulated events, @gipert, and see if that confirms this.
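A quick and rough way to check this in post-processing is sketched below. It assumes that consecutive rows of the step table belong to the same track, which is not guaranteed without a track ID, so this is only a proxy for the true Geant4 step length; the file name is a placeholder.

```python
import awkward as ak
import numpy as np
from lgdo import lh5

# rough estimate of the typical distance between consecutive steps, using the
# same single-table remage output as the script above.
# "sims/fccd-test/V00000A.lh5" is a placeholder file name.
steps = lh5.read_as("stp/germanium", "sims/fccd-test/V00000A.lh5", "ak")

# group steps by Geant4 event ID
ev = ak.unflatten(steps, ak.run_lengths(steps.evtid))

# distances between consecutive step positions, converted from m to mm
dx = (ev.xloc[:, 1:] - ev.xloc[:, :-1]) * 1e3
dy = (ev.yloc[:, 1:] - ev.yloc[:, :-1]) * 1e3
dz = (ev.zloc[:, 1:] - ev.zloc[:, :-1]) * 1e3
step_len = np.sqrt(dx**2 + dy**2 + dz**2)

print("mean inter-step distance [mm]:", ak.mean(ak.ravel(step_len)))
```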
It's very important to review these values, which were blindly imported from MaGe, and think about how to check their impact (related: #162). They should be set as high as possible, to reduce computing time and output size.
Relevant source code:
- remage/src/RMGPhysics.cc, line 355 (commit 2202460)
- remage/src/RMGPhysics.cc, line 397 (commit 2202460)
We should also use different cuts for different volumes, if possible!