Commit 7614f06

fixed dspeed version

ggmarshall committed Sep 29, 2023
2 parents 8935d8e + 9175eaf

Showing 8 changed files with 48 additions and 28 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/distribution.yml

@@ -38,6 +38,6 @@ jobs:
           name: artifact
           path: dist

-      - uses: pypa/[email protected]
+      - uses: pypa/[email protected]
        with:
          password: ${{ secrets.pypi_password }}
6 changes: 2 additions & 4 deletions .github/workflows/main.yml

@@ -4,10 +4,8 @@ on:
   push:
     branches:
       - main
-      - refactor
       - 'releases/**'
   pull_request:
+  merge_group:
-  release:

 concurrency:

@@ -22,7 +20,7 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        python-version: ['3.9', '3.10']
+        python-version: ['3.9', '3.10', '3.11']
         os: [ubuntu-latest, macOS-latest]

     steps:

@@ -54,7 +52,7 @@ jobs:
       run: |
         python -m pip install --upgrade pip wheel setuptools
         python -m pip install --upgrade .[test]
-        pytest --cov=pygama --cov-report=xml
+        python -m pytest --cov=pygama --cov-report=xml
     - name: Upload Coverage to codecov.io
       uses: codecov/codecov-action@v3
       with:
14 changes: 7 additions & 7 deletions .pre-commit-config.yaml

@@ -26,7 +26,7 @@ repos:
   - id: trailing-whitespace

 - repo: https://github.com/asottile/setup-cfg-fmt
-  rev: "v2.2.0"
+  rev: "v2.4.0"
   hooks:
   - id: setup-cfg-fmt

@@ -36,7 +36,7 @@ repos:
   - id: isort

 - repo: https://github.com/asottile/pyupgrade
-  rev: "v3.3.1"
+  rev: "v3.8.0"
   hooks:
   - id: pyupgrade
     args: ["--py38-plus"]

@@ -47,14 +47,14 @@ repos:
   - id: black-jupyter

 - repo: https://github.com/pre-commit/mirrors-mypy
-  rev: "v1.1.1"
+  rev: "v1.4.1"
   hooks:
   - id: mypy
     files: src
     stages: [manual]

 - repo: https://github.com/hadialqattan/pycln
-  rev: "v2.1.3"
+  rev: "v2.1.5"
   hooks:
   - id: pycln
     exclude: ^src/pygama/pargen

@@ -85,12 +85,12 @@ repos:
     stages: [manual]

 - repo: https://github.com/codespell-project/codespell
-  rev: "v2.2.4"
+  rev: "v2.2.5"
   hooks:
   - id: codespell

 - repo: https://github.com/shellcheck-py/shellcheck-py
-  rev: "v0.9.0.2"
+  rev: "v0.9.0.5"
   hooks:
   - id: shellcheck

@@ -103,7 +103,7 @@ repos:
   - id: rst-inline-touching-normal

 - repo: https://github.com/pre-commit/mirrors-prettier
-  rev: "v3.0.0-alpha.6"
+  rev: "v3.0.0-alpha.9-for-vscode"
   hooks:
   - id: prettier
     types_or: [json]
14 changes: 10 additions & 4 deletions README.md

@@ -13,9 +13,15 @@

 *pygama* is a Python package for:

-* converting physics data acquisition system output to [LH5-format](https://github.com/legend-exp/legend-data-format-specs) HDF5 files
-* performing bulk digital signal processing (DSP) on time-series data
-* optimizing DSP routines and tuning associated analysis parameters
-* generating and selecting high-level event data for further analysis
+- converting physics data acquisition system output to
+  [LH5-format](https://legend-exp.github.io/legend-data-format-specs) HDF5
+  files (functionality provided by the
+  [legend-pydataobj](https://legend-pydataobj.readthedocs.io) and
+  [legend-daq2lh5](https://legend-daq2lh5.readthedocs.io) packages)
+- performing bulk digital signal processing (DSP) on time-series data
+  (functionality provided by the [dspeed](https://dspeed.readthedocs.io)
+  package)
+- optimizing DSP routines and tuning associated analysis parameters
+- generating and selecting high-level event data for further analysis

 Check out the [online documentation](https://pygama.readthedocs.io).
2 changes: 1 addition & 1 deletion codecov.yml

@@ -5,7 +5,7 @@ coverage:
   status:
     project:
       default:
-        target: 20%
+        informational: true
     patch: false

 github_checks:
8 changes: 4 additions & 4 deletions setup.cfg

@@ -7,7 +7,7 @@ url = https://github.com/legend-exp/pygama
 author = The LEGEND collaboration
 maintainer = The LEGEND collaboration
 license = GPL-3.0
-license_file = LICENSE
+license_files = LICENSE
 classifiers =
     Development Status :: 4 - Beta
     Intended Audience :: Developers

@@ -32,11 +32,11 @@ project_urls =
 packages = find:
 install_requires =
     colorlog
-    dspeed==1.2.*
+    dspeed>=1.2
     h5py>=3.2
     iminuit
-    legend-daq2lh5==1.0.*
-    legend-pydataobj==1.1.*
+    legend-daq2lh5>=1.0
+    legend-pydataobj>=1.1
     matplotlib
     numba!=0.53.*,!=0.54.*,!=0.57
     numpy>=1.21
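The dependency change above relaxes exact wildcard pins (``==1.2.*``) to minimum-version bounds (``>=1.2``), so newer dspeed releases are no longer blocked. A minimal sketch of the difference, using the third-party `packaging` library (the version numbers below are illustrative, not actual dspeed releases):

```python
# Compare the old wildcard pin with the relaxed minimum-version bound.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

pinned = SpecifierSet("==1.2.*")   # old pin: only 1.2.x releases match
relaxed = SpecifierSet(">=1.2")    # new bound: 1.2 and anything later

print(Version("1.2.5") in pinned)   # True: a 1.2.x patch release matches
print(Version("1.3.0") in pinned)   # False: blocked by the old pin
print(Version("1.3.0") in relaxed)  # True: allowed after this commit
```

The trade-off is the usual one: ``>=`` keeps the package installable alongside future dspeed versions, at the cost of no upper bound guarding against breaking releases.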
21 changes: 16 additions & 5 deletions src/pygama/evt/tcm.py

@@ -20,11 +20,22 @@ def generate_tcm_cols(
     coincidence data (e.g. hit times from different channels). Returns 3
     :class:`numpy.ndarray`\ s representing a vector-of-vector-like structure:
     two flattened arrays ``array_id`` (e.g. channel number) and ``array_idx``
-    (e.g. hit ID) that specify the location in the input ``coin_data`` of each
-    datum belonging to a coincidence event, and a ``cumulative_length`` array
-    that specifies which rows of the other two output arrays correspond to
-    which coincidence event. These can be used to retrieve other data at the
-    same tier as the input data into coincidence structures.
+    (e.g. hit index) that specify the location in the input ``coin_data`` of
+    each datum belonging to a coincidence event, and a ``cumulative_length``
+    array that specifies which rows of the other two output arrays correspond
+    to which coincidence event. These can be used to retrieve other data at
+    the same tier as the input data into coincidence structures.
+
+    The 0th entry of ``cumulative_length`` contains the number of hits in the
+    zeroth coincidence event, and the i'th entry is set to
+    ``cumulative_length[i-1]`` plus the number of hits in the i'th event.
+    Thus, the hits of the i'th event can be found in rows
+    ``cumulative_length[i-1]`` to ``cumulative_length[i] - 1`` of ``array_id``
+    and ``array_idx``.
+
+    An example: ``cumulative_length = [4, 7, ...]``. Then rows 0 to 3 in
+    ``array_id`` and ``array_idx`` correspond to the hits in event 0, rows 4
+    to 6 correspond to event 1, and so on.
+
+    Makes use of the :func:`pandas.concat`, :meth:`pandas.DataFrame.sort_values`,
+    and :meth:`pandas.DataFrame.diff` functions:
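The vector-of-vectors layout described in the new docstring text can be sketched as follows (the arrays and the `event_hits` helper are illustrative, not part of pygama's API):

```python
import numpy as np

# Example TCM columns matching the docstring's example:
# events 0 and 1 contain 4 and 3 hits respectively.
cumulative_length = np.array([4, 7])
array_id = np.array([0, 1, 2, 3, 0, 2, 3])   # e.g. channel number of each hit
array_idx = np.array([7, 7, 8, 7, 9, 9, 9])  # e.g. row of each hit in its channel's table

def event_hits(i):
    """Return (array_id, array_idx) slices for coincidence event i."""
    start = cumulative_length[i - 1] if i > 0 else 0
    stop = cumulative_length[i]
    return array_id[start:stop], array_idx[start:stop]

chans, rows = event_hits(1)
print(chans)  # [0 2 3]
print(rows)   # [9 9 9]
```

This is the standard flattened ragged-array encoding: a cumulative-length vector turns two flat arrays into per-event groups without storing any per-event Python objects.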
9 changes: 7 additions & 2 deletions src/pygama/flow/data_loader.py

@@ -873,8 +873,13 @@ def build_hit_entries(
                 tb_df[f"{low_level}_idx"] = tb_df.index

                 # final DataFrame
-                f_entries = pd.concat((f_entries, tb_df), ignore_index=True)[entry_cols]
+                if f_entries.empty:
+                    f_entries = tb_df
+                else:
+                    f_entries = pd.concat((f_entries, tb_df), ignore_index=True)[
+                        entry_cols
+                    ]
                 # end tb loop

             if self.merge_files:
                 f_entries["file"] = file
             if in_memory:
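The pattern introduced by this hunk, assigning the first non-empty table directly instead of concatenating onto an empty accumulator, can be sketched in isolation (the DataFrames below are illustrative, not the loader's actual state; one likely motivation is that newer pandas warns about, and may mis-infer dtypes when, concatenating empty frames):

```python
import pandas as pd

entry_cols = ["file", "hit_idx"]
f_entries = pd.DataFrame(columns=entry_cols)  # accumulator starts empty

# Stand-ins for the per-table DataFrames built inside the tb loop.
tables = [
    pd.DataFrame({"file": [0, 0], "hit_idx": [3, 5]}),
    pd.DataFrame({"file": [1], "hit_idx": [2]}),
]

for tb_df in tables:
    if f_entries.empty:
        f_entries = tb_df  # first table: assign, don't concat with an empty frame
    else:
        f_entries = pd.concat((f_entries, tb_df), ignore_index=True)[entry_cols]

print(len(f_entries))  # 3
```

Skipping the empty accumulator also preserves the dtypes of the first real table, which an all-object empty frame would otherwise pollute.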