Make RF response more homogeneous vs. zenith #1317
We know that at high zeniths (say, above ~50 deg) the output of the random forests depends strongly on zenith, because image parameters change quickly with zenith.

Since the RFs are trained on MC with a discrete distribution of pointings, when they are applied to the data the distributions of reconstructed quantities (e.g. gammaness or energy) show sudden jumps whenever the telescope pointing crosses the midpoint in zenith between two training nodes (sin_azimuth is also part of the training, but luckily its effect is negligible compared to that of zenith). In the past, for the analysis of some high-zenith observations, we have dealt with this problem simply by using a different training sample, with smaller steps in zenith.

A possible general solution (that would allow us to use our existing high-zenith "coarse-grid" training MC) could be the following: for each event, run the RF prediction twice, with the pointing set to each of the two training nodes closest in zenith to the actual pointing, and interpolate the two predictions (linearly in cos ZD) to the actual pointing.

This would get rid of the jumps, and result in a more accurate reconstruction between the training nodes.

All we need is to know the pointings used in the training (we could save them together with the RFs). Then we replace all calls to "predict" with calls to a new function which calls predict twice and does the interpolation, as sketched below.

Note that besides real data, this will also affect the MC test nodes, which correspond to pointings different from the training ones. Hence I think this change would reduce the systematic errors of the instrument response functions (e.g. effective area) at high zeniths.
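A minimal sketch of such a wrapper, assuming scikit-learn-style RFs that use the pointing zenith distance as one of their input features; `predict_interpolated`, `zd_column` and `training_zds` are illustrative names, not an existing API:

```python
import numpy as np

def predict_interpolated(rf, features, zd_column, training_zds):
    """Evaluate rf.predict at the two training-node zenith distances
    closest to each event's pointing, then interpolate the two
    predictions linearly in cos(ZD) to the actual pointing.

    features: pandas DataFrame with the RF input parameters.
    training_zds: zenith distances (deg) of the training nodes.
    """
    cos_zd = np.cos(np.deg2rad(features[zd_column].to_numpy()))

    # cos(ZD) of the training nodes, in increasing order
    node_cos = np.sort(np.cos(np.deg2rad(np.asarray(training_zds, dtype=float))))

    # Indices of the two nodes bracketing each event's pointing (clipped,
    # so pointings outside the trained range are linearly extrapolated)
    hi = np.clip(np.searchsorted(node_cos, cos_zd), 1, len(node_cos) - 1)
    lo = hi - 1

    predictions = []
    for node in (lo, hi):
        f = features.copy()
        # Evaluate the RF as if the events had been taken at the node pointing.
        # For a classifier one would interpolate predict_proba instead.
        f[zd_column] = np.degrees(np.arccos(node_cos[node]))
        predictions.append(rf.predict(f))

    # Interpolation weight in cos(ZD). An event whose pointing matches a
    # training node gets weight 0 or 1, i.e. exactly that node's prediction,
    # so nothing changes when the pointing coincides with a training node.
    w = (cos_zd - node_cos[lo]) / (node_cos[hi] - node_cos[lo])
    return (1.0 - w) * predictions[0] + w * predictions[1]
```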
Comments

I tested the cosZD interpolation for energy reconstruction, and it seems to work very well.

Hi. Something like this, integrated in the training and inference functions, could be completely transparent to users.

I made an implementation of the proposed RF-prediction interpolation approach in #1320. @vuillaut, it seems simpler to me than the linear_reg + RF approach: for example, in what I implemented there is no need to change anything in the training part (which in the linear_reg approach involved the use of RF predictions), because if the pointing of the event on which the RF is applied matches one of the training nodes' pointings (which is the case during the training), the interpolation is unnecessary.

Besides, this allows us to use the already produced RFs (though that is a minor advantage).
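If the training-node pointings are stored together with the models, a wrapper like the sketch above can be swapped in wherever predict was called, keeping user-facing code unchanged; a hypothetical usage (the column name and node list are purely illustrative):

```python
# Hypothetical drop-in usage; "zd_tel" and the node list are illustrative.
energy_pred = predict_interpolated(
    energy_rf,                       # trained RandomForestRegressor
    dl2_params,                      # DataFrame with the RF input features
    zd_column="zd_tel",              # pointing zenith distance column (deg)
    training_zds=[20.0, 40.0, 60.0], # zenith distances of the training nodes
)
```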