From d295c066eff2e12c749a45fca2263bb7034abee3 Mon Sep 17 00:00:00 2001
From: James Beilsten-Edmands
<30625594+jbeilstenedmands@users.noreply.github.com>
Date: Tue, 17 Sep 2024 18:37:39 +0100
Subject: [PATCH 01/16] Remove superseded multi-crystal tutorial (#2738)
Fixes #1935
---
doc/sphinx/documentation/tutorials/index.rst | 1 -
.../multi_crystal_symmetry_and_scaling.rst | 274 ------------------
newsfragments/2738.misc | 1 +
3 files changed, 1 insertion(+), 275 deletions(-)
delete mode 100644 doc/sphinx/documentation/tutorials/multi_crystal_symmetry_and_scaling.rst
create mode 100644 newsfragments/2738.misc
diff --git a/doc/sphinx/documentation/tutorials/index.rst b/doc/sphinx/documentation/tutorials/index.rst
index abf180cf5df..7430b9c63ed 100644
--- a/doc/sphinx/documentation/tutorials/index.rst
+++ b/doc/sphinx/documentation/tutorials/index.rst
@@ -36,7 +36,6 @@ Advanced command-line tutorials
correcting_poor_initial_geometry_tutorial
centring_vs_pseudocentring
multi_lattice_tutorial
- multi_crystal_symmetry_and_scaling
metrology_corrections
multi_crystal_analysis
br_lyso_multi
diff --git a/doc/sphinx/documentation/tutorials/multi_crystal_symmetry_and_scaling.rst b/doc/sphinx/documentation/tutorials/multi_crystal_symmetry_and_scaling.rst
deleted file mode 100644
index 8c2bb948f04..00000000000
--- a/doc/sphinx/documentation/tutorials/multi_crystal_symmetry_and_scaling.rst
+++ /dev/null
@@ -1,274 +0,0 @@
-.. raw:: html
-
-
- This tutorial requires a DIALS 3 installation.
- Please click here to go to the tutorial for DIALS 2.2.
-
-
-Multi-crystal symmetry analysis and scaling with DIALS
-======================================================
-
-Introduction
-------------
-
-Recent additions to DIALS and xia2 have enabled multi-crystal analysis to be
-performed following integration. These tools are particularly relevant for the
-analysis of many partial datasets, which may be the only practical way of
-performing data collection for certain crystals. After integration, the
-space group symmetry can be investigated by testing for the presence of symmetry
-operations relating the integrated intensities of groups of reflections - the
-program to perform this analysis is :samp:`dials.symmetry` (with algorithms
-similar to those of the program Pointless_).
-A further consideration is that for certain space groups (polar space groups)
-there is an inherent ambiguity in the way that the diffraction pattern can be
-indexed. To combine multiple datasets in these space groups, one must
-reindex all data to a consistent setting, which can be done with the program
-:samp:`dials.cosym` (see `Gildea and Winter`_ for details).
-Finally, the data must be scaled, to correct for experimental effects such as
-differences in crystal size/illuminated volume and radiation damage - this can
-be done with the program :samp:`dials.scale` (with algorithms similar to those
-of the program Aimless_). After the data has been scaled, choices
-can then be made about applying a resolution limit to exclude certain regions
-of the data which may be negatively affected by radiation damage.
-
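-As a sketch, the core sequence of programs explored later in this tutorial is
-(file names are assumptions based on each program's default output names)::
-
-    dials.cosym integrated.expt integrated.refl
-    dials.scale symmetrized.expt symmetrized.refl
-    dials.estimate_resolution scaled.expt scaled.refl
-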
-In this tutorial, we shall investigate a multi-crystal dataset collected on
-the VMXi beamline, Diamond's automated facility for data collection from
-crystallisation experiments *in situ*. The dataset consists of four repeats of
-a 60-degree rotation measurement on a crystal of Proteinase K, taken at different
-locations on the crystal. We shall start with the integrated reflections and
-experiments files generated by running the automated processing software
-:samp:`xia2` with :samp:`pipeline=dials`.
-Have a look at the :doc:`processing_in_detail_betalactamase` tutorial if you
-want to know more about the different processing steps up to this point.
-
-.. Note::
- To obtain the data for this tutorial you can run
- :samp:`dials.data get vmxi_proteinase_k_sweeps`. If you are at Diamond
- Light Source on BAG training then the data are already available.
- After typing :samp:`module load bagtraining` you'll be moved to a working
- folder, with the data already located in the :samp:`tutorial-data/ccp4/integrated_files`
- subdirectory. The processing in this tutorial will produce quite a few files,
- so it's recommended to create and move to a new directory::
-
- mkdir multi_crystal
- cd multi_crystal
-
-
-xia2.multiplex
---------------
-The easiest way to run these tools for a multi-dataset analysis is through the
-program :samp:`xia2.multiplex`.
-This runs several DIALS programs, including the programs described above, while
-producing useful plots and output files.
-
-To run :samp:`xia2.multiplex`, we must provide the path to the input integrated files from
-:samp:`dials.integrate`:
-
-.. dials_tutorial_include:: multi_crystal/xia2.multiplex.cmd
-
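-For illustration, an invocation might look like the following (a sketch; the
-exact file names within the tutorial data are an assumption)::
-
-    xia2.multiplex ../tutorial-data/ccp4/integrated_files/*integrated.expt \
-                   ../tutorial-data/ccp4/integrated_files/*integrated.refl
-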
-.. container:: toggle
-
- .. container:: header
-
- **Show/Hide Log**
-
- .. dials_tutorial_include:: multi_crystal/xia2.multiplex.log
-
-This runs :samp:`dials.cosym` to analyse the Laue symmetry and reindex all datasets
-consistently, scales the data with :samp:`dials.scale`,
-calculates a resolution limit with :samp:`dials.estimate_resolution` and reruns
-:samp:`dials.scale` with the determined resolution cutoff. The
-final dataset is exported to an unmerged MTZ file and an
-`HTML report `_
-is generated. The easiest way to see the results is to open the
-`HTML report `_
-in your browser of choice, e.g.::
-
- firefox xia2.multiplex.html
-
-The report provides a summary of the merging statistics as well as several
-plots; please explore these for a few minutes now.
-This dataset gives good merging statistics; however, if you navigate to the
-"Analysis by batch" tab in "All data", you will see that the fourth dataset has
-poorer statistics than the others. Let's repeat the processing manually
-to explore the different steps and address this issue.
-
-Manual reprocessing
--------------------
-The first step is Laue/Patterson group analysis using
-:doc:`dials.cosym <../programs/dials_cosym>`:
-
-.. dials_tutorial_include:: multi_crystal/dials.cosym.cmd
-
-.. dials_tutorial_include:: multi_crystal/dials.cosym.log
- :start-at: Scoring all possible sub-groups
- :end-before: Writing html report
-
-.. container:: toggle
-
- .. container:: header
-
- **Show/Hide Log**
-
- .. dials_tutorial_include:: multi_crystal/dials.cosym.log
-
-
-As you can see, the :math:`P\,4/m\,m\,m` Patterson group is found with the highest confidence.
-For the corresponding space group, the mirror symmetries are removed to give :math:`P\,4\,2\,2`,
-as the chiral nature of macromolecules means we have a restricted choice of space
-groups. In this example, all datasets were indexed consistently, but this is not
-the case in general.
-
-Next, the data can be scaled:
-
-.. dials_tutorial_include:: multi_crystal/dials.scale.cmd
-
-From the merging statistics it is clear that the data quality is good out to the
-highest resolution measured (:math:`CC_{1/2} > 0.3`), which can be confirmed by a resolution analysis:
-
-.. dials_tutorial_include:: multi_crystal/dials.estimate_resolution.cmd
-
-.. dials_tutorial_include:: multi_crystal/dials.estimate_resolution.log
- :start-at: Resolution cc_half
- :end-at: Resolution cc_half
-
-.. container:: toggle
-
- .. container:: header
-
- **Show/Hide Log**
-
- .. dials_tutorial_include:: multi_crystal/dials.estimate_resolution.log
-
-
-If the resolution limit were lower than the extent of the data, scaling would
-be rerun with the new resolution limit, for example:
-
-.. dials_tutorial_include:: multi_crystal/dials.scale_cut.cmd
-
-.. container:: toggle
-
- .. container:: header
-
- **Show/Hide Log**
-
- .. dials_tutorial_include:: multi_crystal/dials.scale_cut.log
-
-For exploring the scaling results, a wide variety of scaling and merging plots
-can be found in the :samp:`dials.scale.html` report generated by :samp:`dials.scale`.
-
-Almost there
-------------
-As mentioned previously, the fourth dataset gives significantly higher
-R-merge values and much lower I/sigma.
-Therefore the question one must ask is whether it is better to exclude this dataset.
-We can get some useful information about the agreement between datasets by
-running the program :samp:`dials.compute_delta_cchalf`. This program implements
-a version of the algorithms described in Assmann_ *et al.*:
-
-.. dials_tutorial_include:: multi_crystal/dials.compute_delta_cchalf.cmd
-
-.. dials_tutorial_include:: multi_crystal/dials.compute_delta_cchalf.log
- :start-at: # Datasets:
- :end-before: Writing table
-
-.. container:: toggle
-
- .. container:: header
-
- **Show/Hide Log**
-
- .. dials_tutorial_include:: multi_crystal/dials.compute_delta_cchalf.log
-
-It looks like we could get a significantly better :math:`CC_{1/2}` by excluding the final
-dataset - it has a negative :math:`\Delta CC_{1/2}`. But how bad must a dataset be to warrant
-exclusion? Unfortunately this is a difficult question to answer, and one may
-need to refine several structures with different data excluded to properly
-address it.
-If we had many datasets and only a small fraction had a very large negative :math:`\Delta CC_{1/2}`
-then one could argue that these measurements are not drawn from the same population
-as the rest of the data and should be excluded.
-
-To see the effect of removing the last dataset (dataset '3'), we can rerun
-:samp:`dials.scale` (note that this will overwrite the previous scaled files):
-
-.. dials_tutorial_include:: multi_crystal/dials.scale_exclude.cmd
-
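-Such a command would use the :samp:`exclude_datasets=` option of
-:samp:`dials.scale`; as a sketch (file names assumed from the earlier steps)::
-
-    dials.scale symmetrized.expt symmetrized.refl exclude_datasets=3
-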
-.. container:: toggle
-
- .. container:: header
-
- **Show/Hide Log**
-
- .. dials_tutorial_include:: multi_crystal/dials.scale_exclude.log
-
-The overall merging statistics look significantly improved and therefore
-one would probably proceed with the first three datasets::
-
- Resolution: 68.40 - 1.78 > 68.40 - 1.79
- Observations: 222563 > 166095
- Unique reflections: 16534 > 16285
- Redundancy: 13.5 > 10.2
- Completeness: 68.18% > 67.56%
- Mean intensity: 45.3 > 46.0
- Mean I/sigma(I): 25.0 > 26.1
- R-merge: 0.132 > 0.059
- R-meas: 0.136 > 0.062
- R-pim: 0.033 > 0.017
-
-
-We could also have excluded a subset of images, for example using the option
-:samp:`exclude_images=3:301:600` to exclude the last 300 images of dataset 3.
-This option could be used to exclude the end of a dataset that was showing
-significant radiation damage, or if the crystal had moved out of the beam part-way
-through the measurement.
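-
-For example, attached to a scaling command (a sketch, with assumed file names)::
-
-    dials.scale symmetrized.expt symmetrized.refl exclude_images=3:301:600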
-
-It is also worth checking the assigned space group using :samp:`dials.symmetry`.
-In ``dials.cosym``, only the Laue/Patterson group was tested, determining a space
-group of :math:`P\,4\,2\,2`. However, a number of other MX space groups are possible for this
-Laue group (due to the possibility of screw axes), such as :math:`P\,4\,2_1\,2`,
-:math:`P\,4_1\,2\,2` etc. The screw-axis tests are performed by :samp:`dials.symmetry`, and we can disable the
-Laue group testing as we are already confident about it:
-
-.. dials_tutorial_include:: multi_crystal/dials.symmetry.cmd
-
-.. dials_tutorial_include:: multi_crystal/dials.symmetry.log
- :start-after: Laue group
- :end-before: Saving reindexed experiments
-
-.. container:: toggle
-
- .. container:: header
-
- **Show/Hide Log**
-
- .. dials_tutorial_include:: multi_crystal/dials.symmetry.log
-
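-For illustration, the command might look like this (a sketch; the
-:samp:`laue_group=` option name is an assumption - check the
-:samp:`dials.symmetry` documentation)::
-
-    dials.symmetry scaled.expt scaled.refl laue_group=P422
-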
-By analysing the sets of reflections we expect to be present and absent, the
-existence of the :math:`4_1` and :math:`2_1` screw axes is confirmed, hence the space group is
-assigned as :math:`P\,4_1\,2_1\,2`.
-Note that we can do this analysis before or after scaling, as we only need to know
-the Laue group for scaling; however, it is preferable to do it after scaling, as
-outliers may have been removed during scaling.
-
-Finally, we must merge the data and produce an MTZ file for downstream structure
-solution:
-
-.. dials_tutorial_include:: multi_crystal/dials.merge.cmd
-
-.. container:: toggle
-
- .. container:: header
-
- **Show/Hide Log**
-
- .. dials_tutorial_include:: multi_crystal/dials.merge.log
-
-This merges the data and performs a truncation procedure to give a merged MTZ
-file containing intensities and strictly positive structure factors (Fs).
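-
-A typical invocation would be (a sketch; file names assumed from the scaling
-steps above)::
-
-    dials.merge scaled.expt scaled.refl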
-
-
-.. _Pointless: http://www.ccp4.ac.uk/html/pointless.html
-.. _`Gildea and Winter`: https://doi.org/10.1107/S2059798318002978
-.. _Aimless: http://www.ccp4.ac.uk/html/aimless.html
-.. _Assmann: https://doi.org/10.1107/S1600576716005471
diff --git a/newsfragments/2738.misc b/newsfragments/2738.misc
new file mode 100644
index 00000000000..c469ec8df09
--- /dev/null
+++ b/newsfragments/2738.misc
@@ -0,0 +1 @@
+Remove the outdated multi-crystal tutorial, which has been superseded by a better tutorial
From 65a4d2d46b428821b1e98c71481d646d9f0602e1 Mon Sep 17 00:00:00 2001
From: James Beilsten-Edmands
<30625594+jbeilstenedmands@users.noreply.github.com>
Date: Tue, 17 Sep 2024 20:36:08 +0100
Subject: [PATCH 02/16] Suppress output of scipy optimize warning in resolution
fitting (#2737)
Fixes #2640
---
newsfragments/2737.bugfix | 1 +
src/dials/util/resolution_analysis.py | 6 ++++--
2 files changed, 5 insertions(+), 2 deletions(-)
create mode 100644 newsfragments/2737.bugfix
diff --git a/newsfragments/2737.bugfix b/newsfragments/2737.bugfix
new file mode 100644
index 00000000000..28e3c503c19
--- /dev/null
+++ b/newsfragments/2737.bugfix
@@ -0,0 +1 @@
+``dials.resolution_analysis``: Suppress output of potential scipy OptimizeWarning.
diff --git a/src/dials/util/resolution_analysis.py b/src/dials/util/resolution_analysis.py
index 73476094275..a86e0739da5 100644
--- a/src/dials/util/resolution_analysis.py
+++ b/src/dials/util/resolution_analysis.py
@@ -8,6 +8,7 @@
import logging
import math
import typing
+import warnings
import numpy as np
import scipy.optimize
@@ -91,8 +92,9 @@ def tanh_fit(x, y, degree=None, n_obs=None):
p0 = np.array([0.2, 0.4]) # starting parameter estimates
sigma = np.array(standard_errors)
x = np.array(x)
-
- result = scipy.optimize.curve_fit(tanh_cchalf, x, y, p0, sigma=sigma)
+ with warnings.catch_warnings():
+ warnings.simplefilter("ignore", scipy.optimize.OptimizeWarning)
+ result = scipy.optimize.curve_fit(tanh_cchalf, x, y, p0, sigma=sigma)
r = result[0][0]
s0 = result[0][1]
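For reference, the suppression pattern applied above in self-contained form (a
sketch with toy data and a stand-in tanh model, not the actual DIALS fit):

    import warnings

    import numpy as np
    import scipy.optimize

    def tanh_cchalf(s, r, s0):
        # Smooth fall-off of CC1/2 from ~1 towards 0 around s0
        return 0.5 * (1 - np.tanh((s - s0) / r))

    s = np.linspace(0.01, 1.0, 20)
    cc = tanh_cchalf(s, 0.2, 0.4)

    with warnings.catch_warnings():
        # Silence only scipy's OptimizeWarning (e.g. inestimable covariance),
        # and only for the duration of the fit.
        warnings.simplefilter("ignore", scipy.optimize.OptimizeWarning)
        result = scipy.optimize.curve_fit(tanh_cchalf, s, cc, p0=np.array([0.2, 0.4]))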
From fbe503b25b82f0ee1cf62628f71287ee7b3b1826 Mon Sep 17 00:00:00 2001
From: biochem_fan
Date: Fri, 20 Sep 2024 00:25:59 +0900
Subject: [PATCH 03/16] Multi-panel polygon masking in dials.image_viewer
(#2735)
Enabled multi-panel polygon masking in dials.image_viewer
---
newsfragments/2735.feature | 1 +
src/dials/util/image_viewer/mask_frame.py | 177 ++++++++++++++--------
2 files changed, 119 insertions(+), 59 deletions(-)
create mode 100644 newsfragments/2735.feature
diff --git a/newsfragments/2735.feature b/newsfragments/2735.feature
new file mode 100644
index 00000000000..03213c99193
--- /dev/null
+++ b/newsfragments/2735.feature
@@ -0,0 +1 @@
+Implemented multi-panel polygon masking in dials.image_viewer
diff --git a/src/dials/util/image_viewer/mask_frame.py b/src/dials/util/image_viewer/mask_frame.py
index deb386b54b0..d625ac86afe 100644
--- a/src/dials/util/image_viewer/mask_frame.py
+++ b/src/dials/util/image_viewer/mask_frame.py
@@ -6,6 +6,8 @@
from wx.lib.agw.floatspin import EVT_FLOATSPIN, FloatSpin
import wxtbx
+from scitbx import matrix
+from scitbx.array_family import flex
from wxtbx import metallicbutton
from wxtbx.phil_controls import EVT_PHIL_CONTROL
from wxtbx.phil_controls.floatctrl import FloatCtrl as _FloatCtrl
@@ -13,6 +15,7 @@
from wxtbx.phil_controls.strctrl import StrCtrl
import dials.util.masking
+from dials.algorithms.polygon import clip
class FloatCtrl(_FloatCtrl):
@@ -576,8 +579,6 @@ def OnUpdate(self, event):
self._resolution_range_d_min = 0
self._resolution_range_d_max = 0
- from dials.util import masking
-
untrusted_rectangle = self.untrusted_rectangle_ctrl.GetValue().strip()
if len(untrusted_rectangle.strip()) > 0:
rectangle = untrusted_rectangle.strip().replace(",", " ").split(" ")
@@ -588,7 +589,7 @@ def OnUpdate(self, event):
except Exception:
pass
else:
- untrusted = masking.phil_scope.extract().untrusted[0]
+ untrusted = dials.util.masking.phil_scope.extract().untrusted[0]
untrusted.panel = panel
untrusted.rectangle = rectangle
self.params.masking.untrusted.append(untrusted)
@@ -604,7 +605,7 @@ def OnUpdate(self, event):
except Exception:
pass
else:
- untrusted = masking.phil_scope.extract().untrusted[0]
+ untrusted = dials.util.masking.phil_scope.extract().untrusted[0]
untrusted.panel = panel
untrusted.polygon = polygon
self.params.masking.untrusted.append(untrusted)
@@ -619,7 +620,7 @@ def OnUpdate(self, event):
except Exception:
pass
else:
- untrusted = masking.phil_scope.extract().untrusted[0]
+ untrusted = dials.util.masking.phil_scope.extract().untrusted[0]
untrusted.panel = panel
untrusted.circle = circle
self.params.masking.untrusted.append(untrusted)
@@ -672,6 +673,8 @@ def UpdateMask(self):
)
image_viewer_frame.mask_image_viewer = mask
image_viewer_frame.update_settings(layout=False)
+ if image_viewer_frame.settings.show_mask:
+ image_viewer_frame.reload_image()
def OnLeftDown(self, event):
if not event.ShiftDown():
@@ -686,26 +689,6 @@ def OnLeftDown(self, event):
return
elif self._mode_polygon:
xgeo, ygeo = self._pyslip.ConvertView2Geo(click_posn)
- xc, yc = self._pyslip.tiles.map_relative_to_picture_fast_slow(
- xgeo, ygeo
- )
- p1, p0, p_id = self._pyslip.tiles.flex_image.picture_to_readout(yc, xc)
-
- if p_id < 0:
- return
-
- # polygon must be within a single panel
- if len(self._mode_polygon_points) > 0:
- xgeo0, ygeo0 = self._mode_polygon_points[0]
- xc0, yc0 = self._pyslip.tiles.map_relative_to_picture_fast_slow(
- xgeo0, ygeo0
- )
- _, _, p_id0 = self._pyslip.tiles.flex_image.picture_to_readout(
- yc0, xc0
- )
-
- if p_id0 != p_id:
- return
self._mode_polygon_points.append((xgeo, ygeo))
self.DrawPolygon(self._mode_polygon_points)
@@ -794,8 +777,6 @@ def DrawCircle(self, xc, yc, xedge, yedge):
xc, yc = self._pyslip.ConvertView2Geo((xc, yc))
xedge, yedge = self._pyslip.ConvertView2Geo((xedge, yedge))
- from scitbx import matrix
-
center = matrix.col((xc, yc))
edge = matrix.col((xedge, yedge))
r = (center - edge).length()
@@ -849,36 +830,120 @@ def DrawPolygon(self, vertices):
)
def AddUntrustedPolygon(self, vertices):
- if len(vertices) < 4:
+ if len(vertices) < 3:
return
+
+ # flex_image works in (slow, fast) coordinates, while others are in (fast, slow).
vertices.append(vertices[0])
vertices = [
self._pyslip.tiles.map_relative_to_picture_fast_slow(*v) for v in vertices
]
+ flex_vertices = flex.vec2_double(vertices)
+
+ numerical_fudges = [
+ (0, 0),
+ (-1, 0),
+ (+1, 0),
+ (0, -1),
+ (0, +1),
+ (+1, +1),
+ (+1, -1),
+ (-1, +1),
+ (-1, -1),
+ ]
- point_ = []
- panel_id = None
- for p in vertices:
- p1, p0, p_id = self._pyslip.tiles.flex_image.picture_to_readout(p[1], p[0])
- assert p_id >= 0, "Point must be within a panel"
- if panel_id is not None:
- assert (
- panel_id == p_id
- ), "All points must be contained within a single panel"
- panel_id = p_id
- point_.append((p0, p1))
- vertices = point_
-
- from libtbx.utils import flat_list
-
- from dials.util import masking
-
- region = masking.phil_scope.extract().untrusted[0]
- points = flat_list(vertices)
- region.polygon = [int(p) for p in points]
- region.panel = panel_id
-
- self.params.masking.untrusted.append(region)
+ for i, panel in enumerate(self._pyslip.tiles.raw_image.get_detector()):
+ # Get sensor bounding boxes in the projected coordinate system
+ panel_size = panel.get_image_size()
+ panel_corners = [
+ (0, 0),
+ (panel_size[0], 0),
+ (panel_size[0], panel_size[1]),
+ (0, panel_size[1]),
+ ]
+
+ panel_corners_picture = []
+ for corner in panel_corners:
+ p = self._pyslip.tiles.flex_image.tile_readout_to_picture(
+ i, corner[1], corner[0]
+ )
+ panel_corners_picture.append((p[1], p[0]))
+ panel_corners_picture = flex.vec2_double(panel_corners_picture)
+
+ # and intersect with the user-drawn polygon.
+ clipped = clip.simple_with_convex(flex_vertices, panel_corners_picture)
+ # The order matters for clipping; so we try the other if the first order failed.
+ if len(clipped) == 0:
+ clipped = clip.simple_with_convex(
+ flex_vertices, panel_corners_picture.reversed()
+ )
+ if len(clipped) > 0:
+ # print("Input vertices", vertices)
+ # print("Intersection with panel", i, list(clipped))
+ # print("Panel edges", list(panel_corners_picture))
+
+ points = []
+ for p in clipped:
+ if p in flex_vertices:
+ # p is a vertex provided by a user; treat as is.
+ p1_best, p0_best, p_id = (
+ self._pyslip.tiles.flex_image.picture_to_readout(p[1], p[0])
+ )
+ assert p_id == i
+ else:
+ # p is a vertex generated by the clipping procedure.
+ # Thus, p must be on an edge of a panel.
+ # Because of numerical errors in converting between the projected coordinate system
+ # and the panel coordinate system, we have to allow some errors.
+ # Otherwise the point might be outside a panel.
+ # Moving around one pixel is enough to bring the point inside (on an edge of) a panel.
+
+ n_touching_edges_best = 0
+ for f in numerical_fudges:
+ p1, p0, p_id = (
+ self._pyslip.tiles.flex_image.picture_to_readout(
+ p[1] + f[1], p[0] + f[0]
+ )
+ )
+ if p_id != i:
+ continue
+ p1, p0 = round(p1), round(p0)
+
+ # In the polygon masking code, an integer coordinate means a corner of a pixel,
+ # while a pixel's center (0.5, 0.5) must be within the polygon for the pixel to be masked.
+ # Thus, to fully mask a panel edge, we must add one.
+ n_touching_edges = 0
+ if p0 == 0:
+ n_touching_edges += 1
+ elif p0 == panel_size[0] - 1:
+ n_touching_edges += 1
+ p0 = panel_size[0]
+
+ if p1 == 0:
+ n_touching_edges += 1
+ elif p1 == panel_size[1] - 1:
+ n_touching_edges += 1
+ p1 = panel_size[1]
+
+ if n_touching_edges > n_touching_edges_best:
+ p1_best, p0_best = p1, p0
+ n_touching_edges_best = n_touching_edges
+ assert n_touching_edges_best > 0
+
+ # print("Added vertex", p_id, p1_best, p0_best)
+ points.append((p0_best, p1_best))
+
+ # polygon masking does not allow triangles so make it a quadrilateral.
+ if len(points) == 3:
+ points.append(points[0])
+
+ from libtbx.utils import flat_list
+
+ region = dials.util.masking.phil_scope.extract().untrusted[0]
+ region.polygon = [int(p) for p in flat_list(points)]
+ region.panel = i
+
+ self.params.masking.untrusted.append(region)
def AddUntrustedRectangle(self, x0, y0, x1, y1):
x0, y0 = self._pyslip.ConvertView2Geo((x0, y0))
@@ -927,9 +992,7 @@ def AddUntrustedRectangle(self, x0, y0, x1, y1):
x1 = min(panel.get_image_size()[0], x1)
y1 = min(panel.get_image_size()[1], y1)
- from dials.util import masking
-
- region = masking.phil_scope.extract().untrusted[0]
+ region = dials.util.masking.phil_scope.extract().untrusted[0]
region.rectangle = [int(x0), int(x1), int(y0), int(y1)]
region.panel = panel_id
@@ -958,17 +1021,13 @@ def AddUntrustedCircle(self, xc, yc, xedge, yedge):
(xc, yc), (xedge, yedge) = points
- from scitbx import matrix
-
center = matrix.col((xc, yc))
edge = matrix.col((xedge, yedge))
r = (center - edge).length()
if r == 0:
return
- from dials.util import masking
-
- region = masking.phil_scope.extract().untrusted[0]
+ region = dials.util.masking.phil_scope.extract().untrusted[0]
region.circle = [int(xc), int(yc), int(r)]
region.panel = panel_id
From 506c495b4cbdb074b21a363251523226ef1913d5 Mon Sep 17 00:00:00 2001
From: biochem_fan
Date: Fri, 20 Sep 2024 00:29:47 +0900
Subject: [PATCH 04/16] dials.image_viewer: fix stacking of masks and stacking
of multiple experiments (#2730)
Fixes #1512 and fixes #2724
---
newsfragments/2730.bugfix | 1 +
.../util/image_viewer/spotfinder_frame.py | 22 ++++++++++++++-----
2 files changed, 17 insertions(+), 6 deletions(-)
create mode 100644 newsfragments/2730.bugfix
diff --git a/newsfragments/2730.bugfix b/newsfragments/2730.bugfix
new file mode 100644
index 00000000000..63762abc94c
--- /dev/null
+++ b/newsfragments/2730.bugfix
@@ -0,0 +1 @@
+Fixed stacking of masks and stacking of multiple experiments (e.g. stills) in dials.image_viewer (#1512, #2724)
diff --git a/src/dials/util/image_viewer/spotfinder_frame.py b/src/dials/util/image_viewer/spotfinder_frame.py
index a68b0dd409f..39ff2705f89 100644
--- a/src/dials/util/image_viewer/spotfinder_frame.py
+++ b/src/dials/util/image_viewer/spotfinder_frame.py
@@ -1082,28 +1082,38 @@ def stack_images(self):
if not isinstance(image_data, tuple):
image_data = (image_data,)
- i_frame = self.image_chooser.GetClientData(
- self.image_chooser.GetSelection()
- ).index
- imageset = self.images.selected.image_set
+ if self.params.show_mask:
+ masks = tuple(i == MASK_VAL for i in image_data)
+ i_frame = self.image_chooser.GetSelection()
for i in range(1, self.params.stack_images):
- if (i_frame + i) >= len(imageset):
+ if (i_frame + i) >= len(self.images):
break
- image_data_i = imageset[i_frame + i]
+
+ image_data_i = self.images[i_frame + i].get_image_data()
for j, rd in enumerate(image_data):
data = image_data_i[j]
+
if mode == "max":
sel = data > rd
rd = rd.as_1d().set_selected(sel.as_1d(), data.as_1d())
else:
rd += data
+ if self.params.show_mask:
+ image_masks = self.images[i_frame + i].get_mask()
+ for merged_mask, image_mask in zip(masks, image_masks):
+ merged_mask.set_selected(~image_mask, True)
+
# /= stack_images to put on consistent scale with single image
# so that -1 etc. handled correctly (mean mode)
if mode == "mean":
image_data = tuple(i / self.params.stack_images for i in image_data)
+ if self.params.show_mask:
+ for rd, mask in zip(image_data, masks):
+ rd = rd.as_1d().set_selected(mask.as_1d(), MASK_VAL)
+
# Don't show summed images with overloads
self.pyslip.tiles.set_image_data(image_data, show_saturated=False)
From e3cda10401b649dc93ccf9f585b45e134486bbc8 Mon Sep 17 00:00:00 2001
From: Nicholas Devenish
Date: Thu, 19 Sep 2024 16:40:02 +0100
Subject: [PATCH 05/16] Indexing: Don't ignore errors (via Inspect print(e))
(#2736)
* Inspect/fix instances of print(e)
Fixes #2260
* Rename newsfragments/XXX.misc to newsfragments/2736.misc
---------
Co-authored-by: DiamondLightSource-build-server
---
newsfragments/2736.misc | 2 ++
src/dials/command_line/index.py | 1 +
src/dials/command_line/report.py | 2 +-
3 files changed, 4 insertions(+), 1 deletion(-)
create mode 100644 newsfragments/2736.misc
diff --git a/newsfragments/2736.misc b/newsfragments/2736.misc
new file mode 100644
index 00000000000..9fe67252229
--- /dev/null
+++ b/newsfragments/2736.misc
@@ -0,0 +1,2 @@
+Inspect/address uses of print(e)
+
diff --git a/src/dials/command_line/index.py b/src/dials/command_line/index.py
index 08d11cb7f5c..3236baefe54 100644
--- a/src/dials/command_line/index.py
+++ b/src/dials/command_line/index.py
@@ -226,6 +226,7 @@ def index(experiments, reflections, params):
idx_expts, idx_refl = future.result()
except Exception as e:
print(e)
+ raise
else:
if idx_expts is None:
continue
diff --git a/src/dials/command_line/report.py b/src/dials/command_line/report.py
index 4395fb8a1a6..9970f74b66c 100644
--- a/src/dials/command_line/report.py
+++ b/src/dials/command_line/report.py
@@ -2189,7 +2189,7 @@ def __call__(self, rlist=None, experiments=None):
scaling_tables,
) = merging_stats_data(rlist, experiments)
except DialsMergingStatisticsError as e:
- print(e)
+ print(f"Error merging stats data: {e}")
else:
json_data["resolution_graphs"] = resolution_plots
json_data["xtriage_output"] = xtriage_output
From 4abfd842e9516bdd95ba769904eac0049ddba35b Mon Sep 17 00:00:00 2001
From: David Waterman
Date: Mon, 23 Sep 2024 16:11:10 +0100
Subject: [PATCH 06/16] Fix transpose error with elliptical distortion maps for
non-square panels (#2740)
* Fix transposition of 2D correction arrays
Fixes #2739
* Make test more general with non-square panels
* news
---
newsfragments/2740.bugfix | 1 +
src/dials/command_line/generate_distortion_maps.py | 4 ++--
tests/command_line/test_generate_distortion_maps.py | 8 +++++---
3 files changed, 8 insertions(+), 5 deletions(-)
create mode 100644 newsfragments/2740.bugfix
diff --git a/newsfragments/2740.bugfix b/newsfragments/2740.bugfix
new file mode 100644
index 00000000000..34a02e6c142
--- /dev/null
+++ b/newsfragments/2740.bugfix
@@ -0,0 +1 @@
+``dials.generate_distortion_maps``: fix bug with ``mode=ellipse`` for detectors with oblong panels.
diff --git a/src/dials/command_line/generate_distortion_maps.py b/src/dials/command_line/generate_distortion_maps.py
index 1ab6b415a07..007c098c9c2 100644
--- a/src/dials/command_line/generate_distortion_maps.py
+++ b/src/dials/command_line/generate_distortion_maps.py
@@ -139,8 +139,8 @@ def make_dx_dy_ellipse(imageset, phi, l1, l2, centre_xy):
for panel in detector:
size_x, size_y = panel.get_pixel_size()
nx, ny = panel.get_image_size()
- dx = flex.double(flex.grid(nx, ny), 0.0)
- dy = flex.double(flex.grid(nx, ny), 0.0)
+ dx = flex.double(flex.grid(ny, nx), 0.0)
+ dy = flex.double(flex.grid(ny, nx), 0.0)
elt = 0
for j in range(ny):
for i in range(nx):
diff --git a/tests/command_line/test_generate_distortion_maps.py b/tests/command_line/test_generate_distortion_maps.py
index c6d36ea704b..09244b89664 100644
--- a/tests/command_line/test_generate_distortion_maps.py
+++ b/tests/command_line/test_generate_distortion_maps.py
@@ -20,7 +20,7 @@ def make_detector():
quickly"""
pixel_size_x = 0.1
pixel_size_y = 0.1
- npixels_per_panel_x = 50
+ npixels_per_panel_x = 40
npixels_per_panel_y = 50
distance = 100
fast = matrix.col((1, 0, 0))
@@ -141,6 +141,7 @@ def test_elliptical_distortion(run_in_tmp_path):
# All together expect the 4 dy maps to look something like this:
#
# /-----------\ /-----------\
+ # |-4 -4 -4 -4| |-4 -4 -4 -4|
# |-3 -3 -3 -3| |-3 -3 -3 -3|
# |-2 -2 -2 -2| |-2 -2 -2 -2|
# |-1 -1 -1 -1| |-1 -1 -1 -1|
@@ -151,6 +152,7 @@ def test_elliptical_distortion(run_in_tmp_path):
# | 1 1 1 1| | 1 1 1 1|
# | 2 2 2 2| | 2 2 2 2|
# | 3 3 3 3| | 3 3 3 3|
+ # | 4 4 4 4| | 4 4 4 4|
# \-----------/ \-----------/
# So the fundamental data is all in the first column of first panel's map
@@ -170,7 +172,7 @@ def test_elliptical_distortion(run_in_tmp_path):
assert col0[0] == pytest.approx(corr_px)
# Test (1) from above list for panel 0
- for i in range(1, 50):
+ for i in range(1, d[0].get_image_size()[0]):
assert (col0 == dy[0].matrix_copy_column(i)).all_eq(True)
# Test (2)
@@ -184,5 +186,5 @@ def test_elliptical_distortion(run_in_tmp_path):
# Test (1) for panel 2 as well, which then covers everything needed
col0 = dy[2].matrix_copy_column(0)
- for i in range(1, 50):
+ for i in range(1, d[0].get_image_size()[0]):
assert (col0 == dy[2].matrix_copy_column(i)).all_eq(True)
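For reference, the indexing convention behind the fix (a sketch, assuming a
cctbx/scitbx installation; flex.grid takes its dimensions slow-first):

    from scitbx.array_family import flex

    nx, ny = 40, 50  # fast (x) and slow (y) image sizes
    dx = flex.double(flex.grid(ny, nx), 0.0)  # grid dimensions are (slow, fast)
    assert dx.all() == (ny, nx)
    dx[5, 3] = 1.0  # element access is likewise [slow, fast]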
From 7e48a28a193f87a184cadaa7d5a7d3dc79bed5a0 Mon Sep 17 00:00:00 2001
From: David McDonagh <60879630+toastisme@users.noreply.github.com>
Date: Wed, 25 Sep 2024 11:53:49 +0100
Subject: [PATCH 07/16] Laue refinement refactoring (#2742)
* Refactor how Laue and TOF refinement classes are selected during refinement by using ExperimentType
---
newsfragments/2742.misc | 1 +
.../prediction/managed_predictors.py | 23 +++++++------------
src/dials/algorithms/refinement/target.py | 16 +++++--------
3 files changed, 15 insertions(+), 25 deletions(-)
create mode 100644 newsfragments/2742.misc
diff --git a/newsfragments/2742.misc b/newsfragments/2742.misc
new file mode 100644
index 00000000000..9798422a57a
--- /dev/null
+++ b/newsfragments/2742.misc
@@ -0,0 +1 @@
+Simplify logic in refinement for selecting Laue and TOF refinement classes by using ExperimentType.
diff --git a/src/dials/algorithms/refinement/prediction/managed_predictors.py b/src/dials/algorithms/refinement/prediction/managed_predictors.py
index 215a56e542c..5dcb428a624 100644
--- a/src/dials/algorithms/refinement/prediction/managed_predictors.py
+++ b/src/dials/algorithms/refinement/prediction/managed_predictors.py
@@ -213,31 +213,24 @@ def _post_predict_one_experiment(self, experiment, reflections):
class ExperimentsPredictorFactory:
@staticmethod
def from_experiments(experiments, force_stills=False, spherical_relp=False):
- # Determine whether or not to use a stills predictor
+ assert experiments.all_same_type(), "Cannot create ExperimentsPredictor for a mixture of experiments with different types"
+
+ if experiments.all_tof():
+ return TOFExperimentsPredictor(experiments)
+ elif experiments.all_laue():
+ return LaueExperimentsPredictor(experiments)
+
if not force_stills:
for exp in experiments:
if exp.goniometer is None:
force_stills = True
break
- # Construct the predictor
if force_stills:
predictor = StillsExperimentsPredictor(experiments)
predictor.spherical_relp_model = spherical_relp
else:
- all_tof_experiments = False
- for expt in experiments:
- if expt.scan is not None and expt.scan.has_property("time_of_flight"):
- all_tof_experiments = True
- elif all_tof_experiments:
- raise ValueError(
- "Cannot create ExperimentsPredictor for ToF and non-ToF experiments at the same time"
- )
-
- if all_tof_experiments:
- predictor = TOFExperimentsPredictor(experiments)
- else:
- predictor = ScansExperimentsPredictor(experiments)
+ predictor = ScansExperimentsPredictor(experiments)
return predictor
diff --git a/src/dials/algorithms/refinement/target.py b/src/dials/algorithms/refinement/target.py
index 6c4f2687dff..84149b73fd6 100644
--- a/src/dials/algorithms/refinement/target.py
+++ b/src/dials/algorithms/refinement/target.py
@@ -76,20 +76,16 @@ def from_parameters_and_experiments(
+ " not recognised"
)
- all_tof_experiments = False
- for expt in experiments:
- if expt.scan is not None and expt.scan.has_property("time_of_flight"):
- all_tof_experiments = True
- elif all_tof_experiments:
- raise ValueError(
- "Cannot refine ToF and non-ToF experiments at the same time"
- )
-
- if all_tof_experiments:
+ if experiments.all_tof():
from dials.algorithms.refinement.target import (
TOFLeastSquaresResidualWithRmsdCutoff as targ,
)
+ elif experiments.all_laue():
+ from dials.algorithms.refinement.target import (
+ LaueLeastSquaresResidualWithRmsdCutoff as targ,
+ )
+
# Determine whether the target is in X, Y, Phi space or just X, Y to choose
# the right Target to instantiate
elif do_stills:
From b491c224e0a08e5136e2fdac44d32ed9a76c0662 Mon Sep 17 00:00:00 2001
From: David Waterman
Date: Sat, 28 Sep 2024 17:38:50 +0100
Subject: [PATCH 08/16] Revert to an older release of micromamba to get builds
working again
See https://github.com/mamba-org/mamba/issues/3393
---
installer/bootstrap.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/installer/bootstrap.py b/installer/bootstrap.py
index fe9c77690b6..a8b64be2877 100755
--- a/installer/bootstrap.py
+++ b/installer/bootstrap.py
@@ -109,7 +109,7 @@ def install_micromamba(python, cmake):
raise NotImplementedError(
"Unsupported platform %s / %s" % (os.name, sys.platform)
)
- url = "https://micro.mamba.pm/api/micromamba/{0}/latest".format(conda_arch)
+ url = "https://micromamba.snakepit.net/api/micromamba/{0}/1.5.10".format(conda_arch)
mamba_prefix = os.path.realpath("micromamba")
clean_env["MAMBA_ROOT_PREFIX"] = mamba_prefix
mamba = os.path.join(mamba_prefix, member.split("/")[-1])
From 4e3b4e972dd9f942232881aad3dea9c261de16db Mon Sep 17 00:00:00 2001
From: Nicholas Devenish
Date: Thu, 3 Oct 2024 08:59:22 +0100
Subject: [PATCH 09/16] MNT: Update README chat badge
We don't use gitter any more.
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 46c0ca3b636..d5cf8f6b9d9 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)
[![Coverage](https://codecov.io/gh/dials/dials/branch/main/graph/badge.svg)](https://codecov.io/gh/dials/dials)
-[![Gitter](https://badges.gitter.im/dials/community.svg)](https://gitter.im/dials/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
+[![Slack](https://img.shields.io/badge/chat-Slack-green)](https://join.slack.com/t/dials-support/shared_invite/zt-21fvg5n53-SG~882mWRs189GSMuemPfg)
X-ray crystallography for structural biology has benefited greatly from a number of advances in recent years including high performance pixel array detectors, new beamlines capable of delivering micron and sub-micron focus and new light sources such as XFELs. The DIALS project is a collaborative endeavour to develop new diffraction integration software to meet the data analysis requirements presented by these recent advances. There are three end goals: to develop an extensible framework for the development of algorithms to analyse X-ray diffraction data; the implementation of algorithms within this framework and finally a set of user facing tools using these algorithms to allow integration of data from diffraction experiments on synchrotron and free electron sources.
From cf052eb91166bdc537b1a6d92c928b882c5f8a99 Mon Sep 17 00:00:00 2001
From: Nicholas Devenish
Date: Thu, 3 Oct 2024 16:11:39 +0100
Subject: [PATCH 10/16] Make CMake the default build mode for bootstrap (#2755)
Old, full behaviour can be replicated with --libtbx.
And, other general maintenance to fix CI:
* Invert CI pipelines, and work around missing dispatchers
* Remove now-vestigial libtbx.dispatcher.script declaration
* Dynamically choose core count for parallel build
* Speed up build-refresh stage by installing all packages at once
* Update wxpython to fix windows build issue
---
.azure-pipelines/unix-build-cmake.yml | 19 ++++++++++-----
.azure-pipelines/unix-build.yml | 8 +++---
.azure-pipelines/windows-build.yml | 2 +-
.conda-envs/windows.txt | 2 +-
installer/bootstrap.py | 35 +++++++++++++++++++++------
newsfragments/2755.feature | 1 +
setup.py | 1 -
7 files changed, 48 insertions(+), 20 deletions(-)
create mode 100644 newsfragments/2755.feature
diff --git a/.azure-pipelines/unix-build-cmake.yml b/.azure-pipelines/unix-build-cmake.yml
index c555580b629..3ecba30d353 100644
--- a/.azure-pipelines/unix-build-cmake.yml
+++ b/.azure-pipelines/unix-build-cmake.yml
@@ -28,7 +28,7 @@ steps:
dials-data
pytest-cov
pytest-timeout" >> modules/dials/${{ parameters.conda_environment }}
- python modules/dials/installer/bootstrap.py base --clean --python $(PYTHON_VERSION) --cmake
+ python modules/dials/installer/bootstrap.py base --clean --python $(PYTHON_VERSION)
displayName: Create python $(PYTHON_VERSION) environment
workingDirectory: $(Pipeline.Workspace)
@@ -37,7 +37,9 @@ steps:
# Build and install dxtbx
- script: |
- source activate conda_base/
+ source conda_base/etc/profile.d/conda.sh
+ conda activate conda_base/
+ set -euo pipefail
git clone https://github.com/cctbx/dxtbx ./modules/dxtbx
mkdir build_dxtbx
cd build_dxtbx
@@ -52,8 +54,10 @@ steps:
# Build DIALS using the bootstrap script
- bash: |
- source activate conda_base/
- set -eux
+ source conda_base/etc/profile.d/conda.sh
+ conda activate conda_base/
+ set -euxo pipefail
+ set -x
export CXXFLAGS="-isystem$(Pipeline.Workspace)/conda_base ${CXXFLAGS:-}"
mkdir build_dials
cd build_dials
@@ -68,7 +72,9 @@ steps:
# Extract the dials-data version so we can correctly cache regression data.
- bash: |
- source activate conda_base/
+ source conda_base/etc/profile.d/conda.sh
+ conda activate conda_base/
+ set -euxo pipefail
echo "##vso[task.setvariable variable=DIALS_DATA_VERSION_FULL]$(dials.data info -v | grep version.full)"
echo "##vso[task.setvariable variable=DIALS_DATA_VERSION]$(dials.data info -v | grep version.major_minor)"
mkdir -p data
@@ -97,7 +103,8 @@ steps:
# Finally, run the full regression test suite
- bash: |
- source activate conda_base/
+ source conda_base/etc/profile.d/conda.sh
+ conda activate conda_base/
set -eux
export DIALS_DATA=${PWD}/data
export PYTHONDEVMODE=1
diff --git a/.azure-pipelines/unix-build.yml b/.azure-pipelines/unix-build.yml
index c02d547c286..1404b16e5e4 100644
--- a/.azure-pipelines/unix-build.yml
+++ b/.azure-pipelines/unix-build.yml
@@ -17,14 +17,14 @@ steps:
# Download source repositories using the bootstrap script
- bash: |
set -eux
- python modules/dials/installer/bootstrap.py update
+ python modules/dials/installer/bootstrap.py update --libtbx
displayName: Repository checkout
workingDirectory: $(Pipeline.Workspace)
# Create a new conda environment using the bootstrap script
- script: |
set -eux
- python modules/dials/installer/bootstrap.py base --clean --python $(PYTHON_VERSION)
+ python modules/dials/installer/bootstrap.py base --clean --python $(PYTHON_VERSION) --libtbx
displayName: Create python $(PYTHON_VERSION) environment
workingDirectory: $(Pipeline.Workspace)
@@ -46,7 +46,7 @@ steps:
- bash: |
set -eux
export CXXFLAGS="-isystem$(Pipeline.Workspace)/conda_base ${CXXFLAGS:-}"
- python modules/dials/installer/bootstrap.py build --config-flags=--use_environment_flags
+ python modules/dials/installer/bootstrap.py build --config-flags=--use_environment_flags --libtbx
displayName: DIALS build
workingDirectory: $(Pipeline.Workspace)
@@ -94,7 +94,7 @@ steps:
# A conflict between new setuptools and matplotlib causes test failures due to warnings in subprocesses
export PYTHONWARNINGS='ignore:pkg_resources is deprecated as an API:DeprecationWarning,ignore:Deprecated call to `pkg_resources.declare_namespace:DeprecationWarning'
cd modules/dials
- pytest -v -ra -n auto --basetemp="$(Pipeline.Workspace)/tests" --durations=10 --dist loadgroup\
+ libtbx.python -m pytest -v -ra -n auto --basetemp="$(Pipeline.Workspace)/tests" --durations=10 --dist loadgroup\
--cov=$(pwd) --cov-report=html --cov-report=xml --cov-branch \
--timeout=5400 --regression || echo "##vso[task.complete result=Failed;]Some tests failed"
displayName: Run tests
diff --git a/.azure-pipelines/windows-build.yml b/.azure-pipelines/windows-build.yml
index 1d5cae2278f..224dac0c2ff 100644
--- a/.azure-pipelines/windows-build.yml
+++ b/.azure-pipelines/windows-build.yml
@@ -29,7 +29,7 @@ steps:
mv ci-conda-env.txt modules/dials/.conda-envs/windows.txt
- python3 modules/dials/installer/bootstrap.py base --clean --python $(PYTHON_VERSION) --cmake
+ python3 modules/dials/installer/bootstrap.py base --clean --python $(PYTHON_VERSION)
displayName: Create python $(PYTHON_VERSION) environment
workingDirectory: $(Pipeline.Workspace)
diff --git a/.conda-envs/windows.txt b/.conda-envs/windows.txt
index bdaf492ff9f..374071d43a1 100644
--- a/.conda-envs/windows.txt
+++ b/.conda-envs/windows.txt
@@ -51,6 +51,6 @@ conda-forge::sqlite
conda-forge::tabulate
conda-forge::tqdm
conda-forge::urllib3
-conda-forge::wxpython>=4.2.0
+conda-forge::wxpython>=4.2.2
conda-forge::xz
conda-forge::zlib
diff --git a/installer/bootstrap.py b/installer/bootstrap.py
index a8b64be2877..3965d21afe0 100755
--- a/installer/bootstrap.py
+++ b/installer/bootstrap.py
@@ -24,6 +24,7 @@
import threading
import time
import zipfile
+import multiprocessing
try: # Python 3
from urllib.error import HTTPError, URLError
@@ -1016,9 +1017,20 @@ def _get_cmake_exe():
def refresh_build_cmake():
conda_python = _get_base_python()
- run_indirect_command(conda_python, ["-mpip", "install", "-e", "../modules/dxtbx"])
- run_indirect_command(conda_python, ["-mpip", "install", "-e", "../modules/dials"])
- run_indirect_command(conda_python, ["-mpip", "install", "-e", "../modules/xia2"])
+ run_indirect_command(
+ conda_python,
+ [
+ "-mpip",
+ "install",
+ "--no-deps",
+ "-e",
+ "../modules/dxtbx",
+ "-e",
+ "../modules/dials",
+ "-e",
+ "../modules/xia2",
+ ],
+ )
def configure_build_cmake():
@@ -1168,7 +1180,15 @@ def make_build_cmake():
if os.name == "nt":
run_indirect_command(cmake_exe, ["--build", ".", "--config", "RelWithDebInfo"])
else:
- run_indirect_command(cmake_exe, ["--build", "."])
+ parallel = []
+ if "CMAKE_GENERATOR" not in os.environ:
+ if hasattr(os, "sched_getaffinity"):
+ cpu = os.sched_getaffinity()
+ else:
+ cpu = multiprocessing.cpu_count()
+ if isinstance(cpu, int):
+ parallel = ["--parallel", str(cpu)]
+ run_indirect_command(cmake_exe, ["--build", "."] + parallel)
def repository_at_tag(string):
@@ -1263,9 +1283,10 @@ def run():
action="store_true",
)
parser.add_argument(
- "--cmake",
- help="Use the CMake build system. Implies use of a prebuilt cctbx.",
- action="store_true",
+ "--libtbx",
+ help="Use the libtbx build system, compiling cctbx from scratch.",
+ action="store_false",
+ dest="cmake",
)
options = parser.parse_args()
diff --git a/newsfragments/2755.feature b/newsfragments/2755.feature
new file mode 100644
index 00000000000..ecc7d52676a
--- /dev/null
+++ b/newsfragments/2755.feature
@@ -0,0 +1 @@
+Make CMake the default build mode of bootstrap.
diff --git a/setup.py b/setup.py
index 02d25584816..5ec84a6f029 100644
--- a/setup.py
+++ b/setup.py
@@ -51,7 +51,6 @@
],
"entry_points": {
"libtbx.precommit": ["dials=dials"],
- "libtbx.dispatcher.script": ["pytest=pytest"],
"dxtbx.profile_model": [
"gaussian_rs = dials.extensions.gaussian_rs_profile_model_ext:GaussianRSProfileModelExt",
"ellipsoid = dials.extensions.ellipsoid_profile_model_ext:EllipsoidProfileModelExt",
From 0a5f92bdf5c7f7edb90a1363cb65241f63843313 Mon Sep 17 00:00:00 2001
From: Nicholas Devenish
Date: Thu, 3 Oct 2024 20:03:57 +0100
Subject: [PATCH 11/16] Remove --cmake flag from Dockerfile
---
Dockerfile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Dockerfile b/Dockerfile
index 63efca8a409..8aea9d9c0b0 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -7,7 +7,7 @@ WORKDIR /dials
COPY installer/bootstrap.py .
ENV PIP_ROOT_USER_ACTION=ignore
ENV CMAKE_GENERATOR=Ninja
-RUN python3 bootstrap.py --cmake
+RUN python3 bootstrap.py
RUN /dials/conda_base/bin/cmake --install build
RUN /dials/conda_base/bin/python3 -mpip install modules/dxtbx modules/dials modules/xia2
From 0f7b3915acbbea3bf6c46684d8a31d22a056b07e Mon Sep 17 00:00:00 2001
From: David Waterman
Date: Thu, 3 Oct 2024 21:38:48 +0100
Subject: [PATCH 12/16] Fix image viewer "Save As" functionality to produce a
PNG (#2759)
---
newsfragments/2759.bugfix | 1 +
src/dials/util/image_viewer/slip_viewer/frame.py | 7 +++++--
2 files changed, 6 insertions(+), 2 deletions(-)
create mode 100644 newsfragments/2759.bugfix
diff --git a/newsfragments/2759.bugfix b/newsfragments/2759.bugfix
new file mode 100644
index 00000000000..f2711bdf390
--- /dev/null
+++ b/newsfragments/2759.bugfix
@@ -0,0 +1 @@
+``dials.image_viewer``: Fix broken "Save As" PNG functionality.
diff --git a/src/dials/util/image_viewer/slip_viewer/frame.py b/src/dials/util/image_viewer/slip_viewer/frame.py
index d905eff064a..6024ff5f456 100644
--- a/src/dials/util/image_viewer/slip_viewer/frame.py
+++ b/src/dials/util/image_viewer/slip_viewer/frame.py
@@ -782,7 +782,7 @@ def OnSaveAs(self, event):
row_list = range(start_y_tile, stop_y_tile)
y_pix_start = start_y_tile * self.pyslip.tile_size_y - y_offset
- bitmap = wx.Bitmap(x2 - x1, y2 - y1)
+ bitmap = wx.Bitmap(int(x2 - x1), int(y2 - y1))
dc = wx.MemoryDC()
dc.SelectObject(bitmap)
@@ -791,7 +791,10 @@ def OnSaveAs(self, event):
y_pix = y_pix_start
for y in row_list:
dc.DrawBitmap(
- self.pyslip.tiles.GetTile(x, y), x_pix, y_pix, False
+ self.pyslip.tiles.GetTile(x, y),
+ int(x_pix),
+ int(y_pix),
+ False,
)
y_pix += self.pyslip.tile_size_y
x_pix += self.pyslip.tile_size_x
From 66d98c339ae4c1fb17cf1e629d9b4ab283e14b8c Mon Sep 17 00:00:00 2001
From: Nicholas Devenish
Date: Mon, 7 Oct 2024 10:12:09 +0100
Subject: [PATCH 13/16] MNT: Fix bootstrap failing in certain configurations
- Windows could fail to find the correct python or HDF5
- When sched_getaffinity was called, it used the wrong signature
---
installer/bootstrap.py | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/installer/bootstrap.py b/installer/bootstrap.py
index 3965d21afe0..ef729a07c75 100755
--- a/installer/bootstrap.py
+++ b/installer/bootstrap.py
@@ -1112,7 +1112,8 @@ def configure_build_cmake():
[
"../modules",
"-DCMAKE_INSTALL_PREFIX=" + conda_base_root,
- "-DHDF5_ROOT=" + conda_base_root,
+ "-DHDF5_DIR=" + conda_base_root,
+ "-DPython_ROOT_DIR=" + conda_base_root,
]
+ extra_args,
)
@@ -1183,7 +1184,7 @@ def make_build_cmake():
parallel = []
if "CMAKE_GENERATOR" not in os.environ:
if hasattr(os, "sched_getaffinity"):
- cpu = os.sched_getaffinity()
+ cpu = os.sched_getaffinity(0)
else:
cpu = multiprocessing.cpu_count()
if isinstance(cpu, int):
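Note that os.sched_getaffinity(0) returns a *set* of CPU ids rather than an
int, so the isinstance(cpu, int) guard in make_build_cmake appears to still
leave --parallel unset on Linux. A sketch of a helper that yields an int on
both branches:

    import multiprocessing
    import os

    def available_cpus() -> int:
        # sched_getaffinity reports the set of CPUs this process may run on
        if hasattr(os, "sched_getaffinity"):
            return len(os.sched_getaffinity(0))
        return multiprocessing.cpu_count()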
From 1ad373bb890146d36cb3380462e9480b817d1216 Mon Sep 17 00:00:00 2001
From: Nicholas Devenish
Date: Mon, 7 Oct 2024 11:47:38 +0100
Subject: [PATCH 14/16] MNT: bootstrap: Print explicit error message for old
flag
---
installer/bootstrap.py | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/installer/bootstrap.py b/installer/bootstrap.py
index ef729a07c75..287779e5198 100755
--- a/installer/bootstrap.py
+++ b/installer/bootstrap.py
@@ -1289,8 +1289,17 @@ def run():
action="store_false",
dest="cmake",
)
+ parser.add_argument(
+ "--cmake",
+ action="store_true",
+ dest="removed_cmake",
+ help=argparse.SUPPRESS,
+ )
options = parser.parse_args()
+ if options.removed_cmake:
+ # User passed the obsolete parameter
+ sys.exit("Error: --cmake is now the default, please remove --cmake.")
print("Performing actions:", " ".join(options.actions))
From 7df425c58bc44ee43233813551377e24deb11c62 Mon Sep 17 00:00:00 2001
From: Amy Thompson <52806925+amyjaynethompson@users.noreply.github.com>
Date: Wed, 9 Oct 2024 10:52:25 +0100
Subject: [PATCH 15/16] Clustering dimensions (#2743)
Improvements made to the clustering procedure used in dials.correlation_matrix:
- cc_weights=sigma and weights=standard_error are now default for dataset clustering
- the number of dimensions required for cosine-angle clustering is automatically optimised (also default behaviour)
- dimension optimisation output is written to the log file
- when calculating the functional, additional outlier rejection is applied in dials.cosym for stability (used for cluster analysis only, not for symmetry determination)
---------
Co-authored-by: James Beilsten-Edmands <30625594+jbeilstenedmands@users.noreply.github.com>
---
newsfragments/2743.feature | 1 +
src/dials/algorithms/correlation/analysis.py | 39 +++++++-
.../algorithms/symmetry/cosym/__init__.py | 98 +++++++++++--------
src/dials/algorithms/symmetry/cosym/target.py | 14 +++
src/dials/command_line/correlation_matrix.py | 5 +-
src/dials/command_line/cosym.py | 3 -
tests/algorithms/correlation/test_analysis.py | 2 +-
7 files changed, 110 insertions(+), 52 deletions(-)
create mode 100644 newsfragments/2743.feature
diff --git a/newsfragments/2743.feature b/newsfragments/2743.feature
new file mode 100644
index 00000000000..910d3ab2164
--- /dev/null
+++ b/newsfragments/2743.feature
@@ -0,0 +1 @@
+``dials.correlation_matrix``: Add dimension optimisation for intensity-based dataset clustering
diff --git a/src/dials/algorithms/correlation/analysis.py b/src/dials/algorithms/correlation/analysis.py
index 5a50e33f29d..9f2df0baa60 100644
--- a/src/dials/algorithms/correlation/analysis.py
+++ b/src/dials/algorithms/correlation/analysis.py
@@ -12,6 +12,7 @@
import iotbx.phil
from dxtbx.model import ExperimentList
+from libtbx import Auto
from libtbx.phil import scope_extract
from scitbx.array_family import flex
@@ -46,9 +47,27 @@
min_reflections = 10
.type = int(value_min=1)
.help = "The minimum number of reflections per experiment."
+
+dimensionality_assessment {
+ outlier_rejection = True
+ .type = bool
+ .help = "Use outlier rejection when determining optimal dimensions for analysis."
+ maximum_dimensions = 50
+ .type = int
+ .help = "Maximum number of dimensions to test for reasonable processing time"
+}
""",
process_includes=True,
)
+phil_overrides = phil_scope.fetch(
+ source=iotbx.phil.parse(
+ """\
+cc_weights=sigma
+weights=standard_error
+"""
+ )
+)
+working_phil = phil_scope.fetch(sources=[phil_overrides])
class CorrelationMatrix:
@@ -122,6 +141,12 @@ def __init__(
self.params.lattice_group = self.datasets[0].space_group_info()
self.params.space_group = self.datasets[0].space_group_info()
+ # If dimensions are optimised for clustering, need cc_weights=sigma
+ # Otherwise results end up being nonsensical even for high-quality data
+ # Outlier rejection was also found to be beneficial for optimising clustering dimensionality
+ if self.params.dimensions is Auto and self.params.cc_weights != "sigma":
+ raise ValueError("To optimise dimensions, cc_weights=sigma is required.")
+
self.cosym_analysis = CosymAnalysis(self.datasets, self.params)
def _merge_intensities(self, datasets: list) -> list:
@@ -182,7 +207,19 @@ def calculate_matrices(self):
self.cosym_analysis._intialise_target()
        # Cosym procedures to calculate the cos-angle matrix
- self.cosym_analysis._determine_dimensions()
+ if (
+ len(self.unmerged_datasets)
+ <= self.params.dimensionality_assessment.maximum_dimensions
+ ):
+ dims_to_test = len(self.unmerged_datasets)
+ else:
+ dims_to_test = self.params.dimensionality_assessment.maximum_dimensions
+
+ if self.params.dimensions is Auto:
+ self.cosym_analysis._determine_dimensions(
+ dims_to_test,
+ outlier_rejection=self.params.dimensionality_assessment.outlier_rejection,
+ )
self.cosym_analysis._optimise(
self.cosym_analysis.params.minimization.engine,
max_iterations=self.cosym_analysis.params.minimization.max_iterations,
diff --git a/src/dials/algorithms/symmetry/cosym/__init__.py b/src/dials/algorithms/symmetry/cosym/__init__.py
index 61eaf588707..15b13af682a 100644
--- a/src/dials/algorithms/symmetry/cosym/__init__.py
+++ b/src/dials/algorithms/symmetry/cosym/__init__.py
@@ -38,6 +38,9 @@
phil_scope = iotbx.phil.parse(
"""\
+seed = 230
+ .type = int(value_min=0)
+
normalisation = kernel quasi *ml_iso ml_aniso
.type = choice
@@ -265,65 +268,73 @@ def _intialise_target(self):
nproc=self.params.nproc,
)
- def _determine_dimensions(self):
- if self.params.dimensions is Auto and self.target.dim == 2:
- self.params.dimensions = 2
- elif self.params.dimensions is Auto:
- logger.info("=" * 80)
- logger.info(
- "\nAutomatic determination of number of dimensions for analysis"
+ def _determine_dimensions(self, dims_to_test, outlier_rejection=False):
+ logger.info("=" * 80)
+ logger.info("\nAutomatic determination of number of dimensions for analysis")
+ dimensions = []
+ functional = []
+ for dim in range(1, dims_to_test + 1):
+ logger.debug("Testing dimension: %i", dim)
+ self.target.set_dimensions(dim)
+ max_calls = self.params.minimization.max_calls
+ self._optimise(
+ self.params.minimization.engine,
+ max_iterations=self.params.minimization.max_iterations,
+ max_calls=min(20, max_calls) if max_calls else max_calls,
)
- dimensions = []
- functional = []
- for dim in range(1, self.target.dim + 1):
- logger.debug("Testing dimension: %i", dim)
- self.target.set_dimensions(dim)
- max_calls = self.params.minimization.max_calls
- self._optimise(
- self.params.minimization.engine,
- max_iterations=self.params.minimization.max_iterations,
- max_calls=min(20, max_calls) if max_calls else max_calls,
+
+ dimensions.append(dim)
+ functional.append(
+ self.target.compute_functional_score_for_dimension_assessment(
+ self.minimizer.x, outlier_rejection
)
- dimensions.append(dim)
- functional.append(self.minimizer.fun)
+ )
+
+ # Find the elbow point of the curve, in the same manner as that used by
+ # distl spotfinder for resolution method 1 (Zhang et al 2006).
+ # See also dials/algorithms/spot_finding/per_image_analysis.py
- # Find the elbow point of the curve, in the same manner as that used by
- # distl spotfinder for resolution method 1 (Zhang et al 2006).
- # See also dials/algorithms/spot_finding/per_image_analysis.py
+ x = np.array(dimensions)
+ y = np.array(functional)
+ slopes = (y[-1] - y[:-1]) / (x[-1] - x[:-1])
+ p_m = slopes.argmin()
- x = np.array(dimensions)
- y = np.array(functional)
- slopes = (y[-1] - y[:-1]) / (x[-1] - x[:-1])
- p_m = slopes.argmin()
+ x1 = matrix.col((x[p_m], y[p_m]))
+ x2 = matrix.col((x[-1], y[-1]))
- x1 = matrix.col((x[p_m], y[p_m]))
- x2 = matrix.col((x[-1], y[-1]))
+ gaps = []
+ v = matrix.col(((x2[1] - x1[1]), -(x2[0] - x1[0]))).normalize()
- gaps = []
- v = matrix.col(((x2[1] - x1[1]), -(x2[0] - x1[0]))).normalize()
+ for i in range(p_m, len(x)):
+ x0 = matrix.col((x[i], y[i]))
+ r = x1 - x0
+ g = abs(v.dot(r))
+ gaps.append(g)
- for i in range(p_m, len(x)):
- x0 = matrix.col((x[i], y[i]))
- r = x1 - x0
- g = abs(v.dot(r))
- gaps.append(g)
+ p_g = np.array(gaps).argmax()
- p_g = np.array(gaps).argmax()
+ x_g = x[p_g + p_m]
- x_g = x[p_g + p_m]
+ logger.info(
+ dials.util.tabulate(
+ zip(dimensions, functional), headers=("Dimensions", "Functional")
+ )
+ )
+ logger.info("Best number of dimensions: %i", x_g)
+ if int(x_g) < 2:
logger.info(
- dials.util.tabulate(
- zip(dimensions, functional), headers=("Dimensions", "Functional")
- )
+ "As a minimum of 2-dimensions is required, dimensions have been set to 2."
)
- logger.info("Best number of dimensions: %i", x_g)
+ self.target.set_dimensions(2)
+ else:
self.target.set_dimensions(int(x_g))
- logger.info("Using %i dimensions for analysis", self.target.dim)
+ logger.info("Using %i dimensions for analysis", self.target.dim)
def run(self):
self._intialise_target()
- self._determine_dimensions()
+ if self.params.dimensions is Auto and self.target.dim != 2:
+ self._determine_dimensions(self.target.dim)
self._optimise(
self.params.minimization.engine,
max_iterations=self.params.minimization.max_iterations,
@@ -335,6 +346,7 @@ def run(self):
@Subject.notify_event(event="optimised")
def _optimise(self, engine, max_iterations=None, max_calls=None):
+ np.random.seed(self.params.seed)
NN = len(set(self.dataset_ids))
n_sym_ops = len(self.target.sym_ops)
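For reference, the elbow-point search in the hunk above can be exercised in
isolation. Below is a minimal numpy-only sketch of the same technique: it
substitutes plain numpy vectors for scitbx ``matrix.col`` and uses invented
functional values, so it is illustrative rather than the DIALS code itself::

    import numpy as np

    def elbow_point(dimensions, functional):
        """Locate the elbow of a decreasing functional-vs-dimension curve."""
        x = np.asarray(dimensions, dtype=float)
        y = np.asarray(functional, dtype=float)
        # The point with the steepest slope to the final point starts the search
        slopes = (y[-1] - y[:-1]) / (x[-1] - x[:-1])
        p_m = int(slopes.argmin())
        # Unit normal to the chord joining (x[p_m], y[p_m]) and the final point
        v = np.array([y[-1] - y[p_m], -(x[-1] - x[p_m])])
        v /= np.linalg.norm(v)
        # Perpendicular distance of each later point from that chord
        x1 = np.array([x[p_m], y[p_m]])
        gaps = [abs(v @ (x1 - np.array([x[i], y[i]]))) for i in range(p_m, len(x))]
        return int(x[p_m + int(np.argmax(gaps))])

    # Invented scan that levels off after three dimensions
    print(elbow_point([1, 2, 3, 4, 5], [100.0, 40.0, 12.0, 10.0, 9.5]))  # -> 3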
diff --git a/src/dials/algorithms/symmetry/cosym/target.py b/src/dials/algorithms/symmetry/cosym/target.py
index 30ad56b8bf5..4498d3fa8b7 100644
--- a/src/dials/algorithms/symmetry/cosym/target.py
+++ b/src/dials/algorithms/symmetry/cosym/target.py
@@ -473,6 +473,20 @@ def compute_functional(self, x: np.ndarray) -> float:
f = 0.5 * elements.sum()
return f
+ def compute_functional_score_for_dimension_assessment(
+ self, x: np.ndarray, outlier_rejection: bool = True
+ ) -> float:
+ if not outlier_rejection:
+ return self.compute_functional(x)
+ x = x.reshape((self.dim, x.size // self.dim))
+ elements = np.square(self.rij_matrix - x.T @ x)
+ if self.wij_matrix is not None:
+ np.multiply(self.wij_matrix, elements, out=elements)
+
+ q1, q2, q3 = np.quantile(elements, (0.25, 0.5, 0.75))
+ inliers = elements[elements < q2 + (q3 - q1)]
+ return 0.5 * inliers.sum()
+
def compute_gradients_fd(self, x: np.ndarray, eps=1e-6) -> np.ndarray:
"""Compute the gradients at coordinates `x` using finite differences.
diff --git a/src/dials/command_line/correlation_matrix.py b/src/dials/command_line/correlation_matrix.py
index d19c78c65e0..cfe9714af5e 100644
--- a/src/dials/command_line/correlation_matrix.py
+++ b/src/dials/command_line/correlation_matrix.py
@@ -23,10 +23,7 @@
phil_scope = iotbx.phil.parse(
"""\
-include scope dials.algorithms.correlation.analysis.phil_scope
-
-seed = 42
- .type = int(value_min=0)
+include scope dials.algorithms.correlation.analysis.working_phil
output {
log = dials.correlation_matrix.log
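Including ``working_phil`` (and deleting the local ``seed`` definition here
and in ``cosym.py`` below) means the parameter is declared once in the shared
scope rather than redefined per command. A minimal sketch of how such an
include is resolved, using the scope path from this patch (standard libtbx
phil behaviour; requires a DIALS installation to run)::

    import iotbx.phil

    # "include scope" pulls the shared parameters in at parse time; it is
    # only honoured when process_includes=True is passed.
    master = iotbx.phil.parse(
        """
    include scope dials.algorithms.correlation.analysis.working_phil
    """,
        process_includes=True,
    )
    print(master.as_str())  # shows the shared parameters, including seed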
diff --git a/src/dials/command_line/cosym.py b/src/dials/command_line/cosym.py
index bb333f8499a..76286f27689 100644
--- a/src/dials/command_line/cosym.py
+++ b/src/dials/command_line/cosym.py
@@ -75,9 +75,6 @@
.type = int(value_min=1)
.help = "The minimum number of reflections per experiment."
-seed = 230
- .type = int(value_min=0)
-
output {
suffix = "_reindexed"
.type = str
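With ``seed`` now supplied by the shared scope and ``np.random.seed`` called
at the top of ``_optimise``, every optimisation pass, including each pass of
the dimension scan above, starts from the same reproducible random state. A
toy demonstration of the effect (``random_start`` is a hypothetical stand-in,
not the DIALS coordinate initialiser)::

    import numpy as np

    def random_start(n, seed=42):
        # Re-seeding at the top of each optimisation call pins the starting
        # coordinates, so repeated calls (and repeated runs) give identical
        # results.
        np.random.seed(seed)
        return np.random.uniform(-1, 1, n)

    assert np.array_equal(random_start(5), random_start(5))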
diff --git a/tests/algorithms/correlation/test_analysis.py b/tests/algorithms/correlation/test_analysis.py
index a79944a30bd..b1ea5786259 100644
--- a/tests/algorithms/correlation/test_analysis.py
+++ b/tests/algorithms/correlation/test_analysis.py
@@ -75,7 +75,7 @@ def test_filtered_corr_mat(proteinase_k, run_in_tmp_path):
matrices.output_json()
assert pathlib.Path("dials.correlation_matrix.json").is_file()
- expected_ids = [[1, 3], [0, 1, 3]]
+ expected_ids = [[0, 3], [0, 1, 3]]
# Check main algorithm correct with filtering
for i, j in zip(matrices.correlation_clusters, expected_ids):
From b0c8ae111e85387ba995cb07834f28ed9331b56c Mon Sep 17 00:00:00 2001
From: Nicholas Devenish
Date: Thu, 10 Oct 2024 13:26:23 +0100
Subject: [PATCH 16/16] Work around micromamba 2.0 memory issue (#2768)
The conda-forge:: prefix on each package specification was causing a
redownload and reparse for every dependency.
See https://github.com/mamba-org/mamba/issues/3393
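In practice the fix is to name the channel once at environment-creation time
instead of on every specification line. A hypothetical sketch of the idea
(not the actual bootstrap.py code; flags follow standard micromamba usage)::

    import subprocess

    # Passing --channel once lets micromamba parse conda-forge a single
    # time, instead of once per "conda-forge::"-prefixed dependency.
    subprocess.run(
        [
            "micromamba", "create", "--yes",
            "--prefix", "./conda_base",
            "--channel", "conda-forge",
            "--file", ".conda-envs/linux.txt",
        ],
        check=True,
    )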
---
.conda-envs/linux.txt | 114 +++++++++++++++++++--------------------
.conda-envs/macos.txt | 116 ++++++++++++++++++++--------------------
.conda-envs/windows.txt | 112 +++++++++++++++++++-------------------
installer/bootstrap.py | 2 +-
newsfragments/2768.misc | 2 +
5 files changed, 174 insertions(+), 172 deletions(-)
create mode 100644 newsfragments/2768.misc
diff --git a/.conda-envs/linux.txt b/.conda-envs/linux.txt
index 5328ec82c5f..7e4e0543d7e 100644
--- a/.conda-envs/linux.txt
+++ b/.conda-envs/linux.txt
@@ -1,57 +1,57 @@
-conda-forge::alabaster
-conda-forge::biopython
-conda-forge::bzip2
-conda-forge::c-compiler
-conda-forge::colorlog
-conda-forge::conda
-conda-forge::cxx-compiler
-conda-forge::dials-data>=2.4.72
-conda-forge::docutils
-conda-forge::eigen
-conda-forge::future
-conda-forge::gemmi>=0.6.5
-conda-forge::h5py>=3.1.0
-conda-forge::hdf5
-conda-forge::hdf5plugin
-conda-forge::iota
-conda-forge::jinja2
-conda-forge::libboost-devel
-conda-forge::libboost-python-devel
-conda-forge::libglu
-conda-forge::matplotlib-base>=3.0.2
-conda-forge::mesa-libgl-devel-cos7-x86_64
-conda-forge::mrcfile
-conda-forge::msgpack-cxx
-conda-forge::msgpack-python
-conda-forge::natsort
-conda-forge::numpy>=1.21.5,<2
-conda-forge::nxmx
-conda-forge::orderedset
-conda-forge::pandas
-conda-forge::pillow>=5.4.1
-conda-forge::pint
-conda-forge::pip
-conda-forge::psutil
-conda-forge::pybind11
-conda-forge::pyopengl
-conda-forge::pyrtf
-conda-forge::pytest
-conda-forge::pytest-forked
-conda-forge::pytest-mock
-conda-forge::pytest-xdist
-conda-forge::python-dateutil>=2.7.0
-conda-forge::reportlab
-conda-forge::requests
-conda-forge::scikit-learn
-conda-forge::scipy
-conda-forge::scons
-conda-forge::setuptools
-conda-forge::six
-conda-forge::sphinx>=4
-conda-forge::sqlite
-conda-forge::tabulate
-conda-forge::tqdm
-conda-forge::urllib3
-conda-forge::wxpython>=4.2.0
-conda-forge::xz
-conda-forge::zlib
+alabaster
+biopython
+bzip2
+c-compiler
+colorlog
+conda
+cxx-compiler
+dials-data>=2.4.72
+docutils
+eigen
+future
+gemmi>=0.6.5
+h5py>=3.1.0
+hdf5
+hdf5plugin
+iota
+jinja2
+libboost-devel
+libboost-python-devel
+libglu
+matplotlib-base>=3.0.2
+mesa-libgl-devel-cos7-x86_64
+mrcfile
+msgpack-cxx
+msgpack-python
+natsort
+numpy>=1.21.5,<2
+nxmx
+orderedset
+pandas
+pillow>=5.4.1
+pint
+pip
+psutil
+pybind11
+pyopengl
+pyrtf
+pytest
+pytest-forked
+pytest-mock
+pytest-xdist
+python-dateutil>=2.7.0
+reportlab
+requests
+scikit-learn
+scipy
+scons
+setuptools
+six
+sphinx>=4
+sqlite
+tabulate
+tqdm
+urllib3
+wxpython>=4.2.0
+xz
+zlib
diff --git a/.conda-envs/macos.txt b/.conda-envs/macos.txt
index 4758f05435c..2a26cb13b95 100644
--- a/.conda-envs/macos.txt
+++ b/.conda-envs/macos.txt
@@ -1,58 +1,58 @@
-conda-forge::alabaster
-conda-forge::biopython
-conda-forge::bzip2
-conda-forge::c-compiler
-conda-forge::colorlog
-conda-forge::conda
-conda-forge::cxx-compiler
-conda-forge::dials-data>=2.4.72
-conda-forge::docutils
-conda-forge::eigen
-conda-forge::future
-conda-forge::gemmi>=0.6.5
-conda-forge::h5py>=3.1.0
-conda-forge::hdf5
-conda-forge::hdf5plugin
-conda-forge::iota
-conda-forge::jinja2
-conda-forge::libboost-devel
-conda-forge::libboost-python-devel
-conda-forge::libcxx
-conda-forge::matplotlib-base>=3.0.2
-conda-forge::mrcfile
-conda-forge::msgpack-cxx
-conda-forge::msgpack-python
-conda-forge::natsort
-conda-forge::numpy>=1.21.5,<2
-conda-forge::nxmx
-conda-forge::orderedset
-conda-forge::pandas
-conda-forge::pillow>=5.4.1
-conda-forge::pint
-conda-forge::pip
-conda-forge::psutil
-conda-forge::pthread-stubs
-conda-forge::pybind11
-conda-forge::pyopengl
-conda-forge::pyrtf
-conda-forge::pytest
-conda-forge::pytest-forked
-conda-forge::pytest-mock
-conda-forge::pytest-xdist
-conda-forge::python-dateutil>=2.7.0
-conda-forge::python.app
-conda-forge::reportlab
-conda-forge::requests
-conda-forge::scikit-learn
-conda-forge::scipy
-conda-forge::scons
-conda-forge::setuptools
-conda-forge::six
-conda-forge::sphinx>=4
-conda-forge::sqlite
-conda-forge::tabulate
-conda-forge::tqdm
-conda-forge::urllib3
-conda-forge::wxpython>=4.2.0=*_5
-conda-forge::xz
-conda-forge::zlib
+alabaster
+biopython
+bzip2
+c-compiler
+colorlog
+conda
+cxx-compiler
+dials-data>=2.4.72
+docutils
+eigen
+future
+gemmi>=0.6.5
+h5py>=3.1.0
+hdf5
+hdf5plugin
+iota
+jinja2
+libboost-devel
+libboost-python-devel
+libcxx
+matplotlib-base>=3.0.2
+mrcfile
+msgpack-cxx
+msgpack-python
+natsort
+numpy>=1.21.5,<2
+nxmx
+orderedset
+pandas
+pillow>=5.4.1
+pint
+pip
+psutil
+pthread-stubs
+pybind11
+pyopengl
+pyrtf
+pytest
+pytest-forked
+pytest-mock
+pytest-xdist
+python-dateutil>=2.7.0
+python.app
+reportlab
+requests
+scikit-learn
+scipy
+scons
+setuptools
+six
+sphinx>=4
+sqlite
+tabulate
+tqdm
+urllib3
+wxpython>=4.2.0=*_5
+xz
+zlib
diff --git a/.conda-envs/windows.txt b/.conda-envs/windows.txt
index 374071d43a1..9a545bfb837 100644
--- a/.conda-envs/windows.txt
+++ b/.conda-envs/windows.txt
@@ -1,56 +1,56 @@
-conda-forge::alabaster
-conda-forge::biopython
-conda-forge::bzip2
-conda-forge::c-compiler
-conda-forge::colorlog
-conda-forge::conda
-conda-forge::cxx-compiler
-conda-forge::dials-data>=2.4.72
-conda-forge::docutils
-conda-forge::eigen
-conda-forge::future
-conda-forge::gemmi>=0.6.5
-conda-forge::h5py>=3.1.0
-conda-forge::hdf5
-conda-forge::hdf5plugin
-conda-forge::iota
-conda-forge::jinja2
-conda-forge::libboost-devel
-conda-forge::libboost-python-devel
-conda-forge::matplotlib-base>=3.0.2
-conda-forge::mrcfile
-conda-forge::msgpack-cxx
-conda-forge::msgpack-python
-conda-forge::natsort
-conda-forge::numpy>=1.21.5,<2
-conda-forge::nxmx
-conda-forge::orderedset
-conda-forge::pandas
-conda-forge::pillow>=5.4.1
-conda-forge::pint
-conda-forge::pip
-conda-forge::psutil
-conda-forge::pybind11
-conda-forge::pyopengl
-conda-forge::pyrtf
-conda-forge::pytest
-conda-forge::pytest-forked
-conda-forge::pytest-mock
-conda-forge::pytest-nunit
-conda-forge::pytest-xdist
-conda-forge::python-dateutil>=2.7.0
-conda-forge::reportlab
-conda-forge::requests
-conda-forge::scikit-learn
-conda-forge::scipy
-conda-forge::scons
-conda-forge::setuptools
-conda-forge::six
-conda-forge::sphinx>=4
-conda-forge::sqlite
-conda-forge::tabulate
-conda-forge::tqdm
-conda-forge::urllib3
-conda-forge::wxpython>=4.2.2
-conda-forge::xz
-conda-forge::zlib
+alabaster
+biopython
+bzip2
+c-compiler
+colorlog
+conda
+cxx-compiler
+dials-data>=2.4.72
+docutils
+eigen
+future
+gemmi>=0.6.5
+h5py>=3.1.0
+hdf5
+hdf5plugin
+iota
+jinja2
+libboost-devel
+libboost-python-devel
+matplotlib-base>=3.0.2
+mrcfile
+msgpack-cxx
+msgpack-python
+natsort
+numpy>=1.21.5,<2
+nxmx
+orderedset
+pandas
+pillow>=5.4.1
+pint
+pip
+psutil
+pybind11
+pyopengl
+pyrtf
+pytest
+pytest-forked
+pytest-mock
+pytest-nunit
+pytest-xdist
+python-dateutil>=2.7.0
+reportlab
+requests
+scikit-learn
+scipy
+scons
+setuptools
+six
+sphinx>=4
+sqlite
+tabulate
+tqdm
+urllib3
+wxpython>=4.2.2
+xz
+zlib
diff --git a/installer/bootstrap.py b/installer/bootstrap.py
index 287779e5198..55731bd25d5 100755
--- a/installer/bootstrap.py
+++ b/installer/bootstrap.py
@@ -110,7 +110,7 @@ def install_micromamba(python, cmake):
raise NotImplementedError(
"Unsupported platform %s / %s" % (os.name, sys.platform)
)
- url = "https://micromamba.snakepit.net/api/micromamba/{0}/1.5.10".format(conda_arch)
+ url = "https://micromamba.snakepit.net/api/micromamba/{0}/latest".format(conda_arch)
mamba_prefix = os.path.realpath("micromamba")
clean_env["MAMBA_ROOT_PREFIX"] = mamba_prefix
mamba = os.path.join(mamba_prefix, member.split("/")[-1])
diff --git a/newsfragments/2768.misc b/newsfragments/2768.misc
new file mode 100644
index 00000000000..921c477bf27
--- /dev/null
+++ b/newsfragments/2768.misc
@@ -0,0 +1,2 @@
+Remove cause of micromamba 2.0 memory issue.
+