
Comparing changes

This is a direct comparison between two commits made in this repository or its related repositories.

base repository: crate/sqlalchemy-cratedb
base: 432cae8e21635a5dbb8c559b0d1a4ffe20b5e48b
..
head repository: crate/sqlalchemy-cratedb
compare: d1fb1e750339c3d55bdd0df738abce6080180f31
Showing with 13 additions and 21 deletions.
  1. +3 −12 .github/workflows/codeql.yml
  2. +2 −2 docs/dataframe.rst
  3. +6 −5 pyproject.toml
  4. +2 −2 tests/bulk_test.py
15 changes: 3 additions & 12 deletions .github/workflows/codeql.yml
@@ -33,20 +33,11 @@ jobs:
         sqla-version: ['<1.4', '<1.5', '<2.1']
 
     steps:
-      - name: Acquire sources
+      - name: Checkout
         uses: actions/checkout@v4
 
-      - name: Set up Python
-        uses: actions/setup-python@v5
-        with:
-          python-version: 3.11
-          architecture: x64
-          cache: 'pip'
-          cache-dependency-path:
-            pyproject.toml
-
       - name: Initialize CodeQL
-        uses: github/codeql-action/init@v2
+        uses: github/codeql-action/init@v3
         with:
           languages: ${{ matrix.language }}
           config-file: ./.github/codeql.yml
@@ -61,6 +52,6 @@ jobs:
           pip install "sqlalchemy${{ matrix.sqla-version }}" --upgrade --pre
 
       - name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@v2
+        uses: github/codeql-action/analyze@v3
         with:
           category: "/language:${{ matrix.language }}/sqla-version:${{ matrix.sqla-version }}"
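The workflow's test matrix feeds plain PEP 440 version specifiers into the `pip install "sqlalchemy${{ matrix.sqla-version }}"` step shown above. A minimal sketch of that string expansion, with the matrix values copied from the hunk and an illustrative helper name:

```python
# Sketch: how the workflow's pip install line expands per matrix job.
# The version specifiers come from the hunk above; `pip_requirement`
# is an illustrative helper name, not part of the workflow.
SQLA_VERSIONS = ["<1.4", "<1.5", "<2.1"]

def pip_requirement(spec: str) -> str:
    """Concatenate the package name with a PEP 440 version specifier."""
    return f"sqlalchemy{spec}"

if __name__ == "__main__":
    for spec in SQLA_VERSIONS:
        # Each iteration mirrors one matrix job, e.g. `pip install "sqlalchemy<1.4"`.
        print(pip_requirement(spec))
```

Because the specifier is appended verbatim, each of the three matrix jobs exercises the newest SQLAlchemy release below a different major/minor boundary.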
4 changes: 2 additions & 2 deletions docs/dataframe.rst
@@ -76,8 +76,8 @@ The package provides a ``bulk_insert`` function to use the
 workload across multiple batches, using a defined chunk size.
 
     >>> import sqlalchemy as sa
-    >>> from pandas._testing import makeTimeDataFrame
     >>> from crate.client.sqlalchemy.support import insert_bulk
+    >>> from pueblo.testing.pandas import makeTimeDataFrame
     ...
     >>> # Define number of records, and chunk size.
     >>> INSERT_RECORDS = 42
@@ -159,8 +159,8 @@ in a batched/chunked manner, using a defined chunk size, effectively using the
 pandas implementation introduced in the previous section.
 
     >>> import dask.dataframe as dd
-    >>> from pandas._testing import makeTimeDataFrame
     >>> from crate.client.sqlalchemy.support import insert_bulk
+    >>> from pueblo.testing.pandas import makeTimeDataFrame
     ...
     >>> # Define the number of records, the number of computing partitions,
     >>> # and the chunk size of each database insert operation.
11 changes: 6 additions & 5 deletions pyproject.toml
@@ -92,12 +92,12 @@ dependencies = [
 ]
 [project.optional-dependencies]
 develop = [
-    "black<25",
+    "black<24",
     "mypy<1.9",
     "poethepoet<0.25",
-    "pyproject-fmt<1.8",
-    "ruff==0.1.14",
-    "validate-pyproject<0.17",
+    "pyproject-fmt<1.7",
+    "ruff==0.1.13",
+    "validate-pyproject<0.16",
 ]
 doc = [
     "crate-docs-theme>=0.26.5",
@@ -110,7 +110,8 @@ release = [
 test = [
     "dask",
     "pandas<2.3",
-    "pytest<9",
+    "pueblo>=0.0.7",
+    "pytest<8",
     "pytest-cov<5",
     "pytest-mock<4",
 ]
4 changes: 2 additions & 2 deletions tests/bulk_test.py
@@ -176,7 +176,7 @@ def test_bulk_save_pandas(self, mock_cursor):
         """
         Verify bulk INSERT with pandas.
         """
-        from pandas._testing import makeTimeDataFrame
+        from pueblo.testing.pandas import makeTimeDataFrame
         from sqlalchemy_cratedb import insert_bulk
 
         # 42 records / 8 chunksize = 5.25, which means 6 batches will be emitted.
@@ -216,7 +216,7 @@ def test_bulk_save_dask(self, mock_cursor):
         Verify bulk INSERT with Dask.
         """
         import dask.dataframe as dd
-        from pandas._testing import makeTimeDataFrame
+        from pueblo.testing.pandas import makeTimeDataFrame
         from sqlalchemy_cratedb import insert_bulk
 
         # 42 records / 4 partitions means each partition has a size of 10.5 elements.
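The two inline comments in these tests encode simple chunk arithmetic. A quick check of both figures (only the numbers come from the tests; how Dask rounds 10.5 into whole-record partition sizes is not claimed here):

```python
import math

# pandas path: 42 records with chunksize 8 -> ceil(42 / 8) = 6 bulk batches.
records, chunksize = 42, 8
batch_count = math.ceil(records / chunksize)

# Dask path: 42 records over 4 partitions -> 42 / 4 = 10.5 elements per
# partition on average; actual partitions must hold whole records.
partitions = 4
avg_partition_size = records / partitions

print(batch_count, avg_partition_size)  # 6 10.5
```

The ceiling in the first calculation is why the mocked cursor is expected to observe six `execute` calls rather than five.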