__nbid__ = '0070'
__author__ = 'Brian Merino <brian.merino@noirlab.edu>, Vinicius Placco <vinicius.placco@noirlab.edu>'
__version__ = '20241216' # yyyymmdd; version datestamp of this notebook
__keywords__ = ['GHOST','Gemini','stars','DRAGONS']
Gemini GHOST XX Oph reduction using DRAGONS Python API¶
Public archival data from ghost_tutorial - GS-ENG-GHOST-COM-3-915 (XX Oph)¶
adapted from https://dragons.readthedocs.io/projects/ghost-drtutorial/en/release-3.2.x/index.html¶

Note: This notebook may take more than an hour to run. The total running time depends on the number and variety of files you reduce, so it may differ if you work with a dataset other than the one used in this tutorial.¶
If you want to run this notebook on your local machine, ensure your DRAGONS calibration database is correctly set up. To do so, make sure the calibrations section of ~/.dragons/dragonsrc looks like this:¶
[calibs]
databases = ~/.dragons/dragons.db get store
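If you want to verify the local setup from Python, the short sketch below (an optional check, assuming the standard ~/.dragons/dragonsrc location) prints the [calibs] section of your dragonsrc:
# Optional check: print the [calibs] section of the local dragonsrc.
# The path below is the standard DRAGONS location and is an assumption here.
import configparser
import os

rcfile = os.path.expanduser('~/.dragons/dragonsrc')
config = configparser.ConfigParser()
config.read(rcfile)
if config.has_section('calibs'):
    print(dict(config['calibs']))
else:
    print('No [calibs] section found in', rcfile)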
Table of contents¶
- Goals
- Summary
- Disclaimers and attribution
- Imports and setup
- About the dataset
- Prepare the working directory
- Downloading data for reduction
- Create file lists
- Set up the DRAGONS logger
- Create update_list() and reduce_func()
- Select and reduce biases
  - Select and reduce slit biases
  - Select and reduce science biases
  - Select and reduce flat/arc biases
- Select and reduce master and slit flats
  - Select and reduce flats
- Select and reduce arcs
  - Select and reduce slit viewer data
  - Select and reduce arcs
- Select and reduce spectroscopic standard
  - Select and reduce standard data
- Select and reduce science data
  - Select and reduce science slit-viewer data
  - Select and reduce science data
- Plot reduced spectra
- Output 1D spectra
- Save plots of reduced spectra
- Make reduced spectra IRAF compatible
Goals¶
Showcase how to reduce GHOST spectroscopy data with the Gemini DRAGONS package on the Data Lab science platform, using the custom DRAGONS kernel "DRAGONS-3.2.2 (DL,Py3.10.14)". The steps include downloading data from the Gemini archive, setting up the DRAGONS calibration service, processing biases, flats, and arcs, creating master flats and slit-flats, reducing the standard and science data, and finally producing the reduced spectra for GHOST's red and blue arms.
Summary¶
DRAGONS is a Python-based astronomical data reduction platform written by the Gemini Science User Support Department. It can currently be used to reduce imaging data from Gemini instruments GMOS, NIRI, Flamingos 2, GSAOI, and GNIRS, as well as spectroscopic data taken with GHOST and GMOS in longslit mode. Linked here is a general list of guides, manuals, and tutorials about the use of DRAGONS.
The DRAGONS kernel has been made available in the Data Lab environment, allowing users to access the routines without having to install the software on their local machines. Note that when a DRAGONS command is executed, its output is displayed inside the cell; make sure to scroll through the output so that no errors are missed.
In this notebook, we present an example of a DRAGONS Jupyter notebook that works in the Data Lab environment to fully reduce Gemini South GHOST blue:2x2 and red:2x2 spectroscopy data. This notebook does not present all of the details of the many options available to adjust or optimize the DRAGONS GHOST data reduction process; rather, it shows one example of a standard reduction of a GHOST spectroscopic dataset.
The data used in this notebook example are GHOST blue:2x2 and red:2x2 spectroscopy data from the Gemini archive of the star XX Oph, taken during the GHOST commissioning run. Because the data come from a commissioning run, there is no program information available, but you can find more information about GHOST's red and blue IFUs on the GHOST instrument page.
Disclaimer & attribution¶
Disclaimers¶
Note that using the Astro Data Lab constitutes your agreement with our minimal Disclaimers.
Acknowledgments¶
If you use Astro Data Lab in your published research, please include the following text in your paper's Acknowledgments section:
This research uses services or data provided by the Astro Data Lab, which is part of the Community Science and Data Center (CSDC) Program of NSF NOIRLab. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under a cooperative agreement with the U.S. National Science Foundation.
If you use SPARCL jointly with the Astro Data Lab platform (via JupyterLab, command-line, or web interface) in your published research, please include this text in your paper's Acknowledgments section:
This research uses services or data provided by the SPectra Analysis and Retrievable Catalog Lab (SPARCL) and the Astro Data Lab, which are both part of the Community Science and Data Center (CSDC) Program of NSF NOIRLab. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under a cooperative agreement with the U.S. National Science Foundation.
In either case please cite the following papers:
- Data Lab concept paper: Fitzpatrick et al., "The NOAO Data Laboratory: a conceptual overview", SPIE, 9149, 2014, https://doi.org/10.1117/12.2057445
- Astro Data Lab overview: Nikutta et al., "Data Lab - A Community Science Platform", Astronomy and Computing, 33, 2020, https://doi.org/10.1016/j.ascom.2020.100411
If you are referring to the Data Lab JupyterLab / Jupyter Notebooks, cite:
- Juneau et al., "Jupyter-Enabled Astrophysical Analysis Using Data-Proximate Computing Platforms", CiSE, 23, 15, 2021, https://doi.org/10.1109/MCSE.2021.3057097
If publishing in an AAS journal, also add the keyword: \facility{Astro Data Lab}
And if you are using SPARCL, please also add \software{SPARCL} and cite:
- Juneau et al., "SPARCL: SPectra Analysis and Retrievable Catalog Lab", Conference Proceedings for ADASS XXXIII, 2024, https://doi.org/10.48550/arXiv.2401.05576
The NOIRLab Library maintains lists of proper acknowledgments to use when publishing papers using the Lab's facilities, data, or services.
For this notebook specifically, please acknowledge:
- DRAGONS publication: Labrie et al., DRAGONS - Data Reduction for Astronomy from Gemini Observatory North and South, ASPC, 523, 321L
Imports and setup¶
import warnings
import glob
import os
import numpy as np
import astrodata
import shutil
import matplotlib.pyplot as plt

from gempy.utils import logutils
from gempy.adlibrary import dataselect

from recipe_system import cal_service
from recipe_system.reduction.coreReduce import Reduce

from astropy.utils.exceptions import AstropyWarning
warnings.simplefilter('ignore', category=AstropyWarning)
warnings.filterwarnings("ignore")
Define a helper function to check for and delete a directory or file.¶
def check_and_delete(path):
    #Check to see if path already exists.
    if os.path.exists(path):
        #If path leads to a directory.
        if os.path.isdir(path):
            #Remove directory
            shutil.rmtree(path)

        #If path leads to a file.
        elif os.path.isfile(path):
            #Remove file
            os.remove(path)
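For example, the helper can remove a stale calibrations directory left over from a previous run (the clean_up() function defined later does exactly this):
# Remove a leftover calibrations/ directory from a previous run, if present.
check_and_delete('calibrations')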
About the dataset¶
The GHOST data used for this tutorial are observations of the star XX Oph. IFU-1 was used to observe the star at standard resolution. The data were obtained during the commissioning run.
The table below contains a summary of the dataset:
| Observation Type | File name(s) | IFU, Binning, and Read Mode |
| :--- | :--- | :---: |
| Science | S20230416S0079 | blue:2x2, slow; red:2x2, medium |
| Science biases | S20230417S0011-015 | |
| Science flats | S20230416S0047 | 1x1; blue:slow; red:medium |
| Science arcs | S20230416S0049-51 | 1x1; blue:slow; red:medium |
| Flat biases / Arc biases | S20230417S0036-40 | 1x1; blue:slow; red:medium |
| Standard (CD -32 9927) | S20230416S0073 | blue:2x2, slow; red:2x2, medium |
| Standard biases / Standard flats / Standard arc / Standard flat biases / Standard arc biases | Use science calibrations | |
| BPMs | bpm_20220601_ghost_blue_11_full_4amp.fits, bpm_20220601_ghost_red_11_full_4amp.fits | |
Prepare the working directory¶
If you have any intermediate files that were created from running this code in the past, you will need to remove them from your working directory. The cell below defines a clean-up function that removes the log, list, and fits files from your working directory. The same function can be called at the end of the tutorial, leaving you with only the final products.
By default, this function deletes all the log, list, and fits files in the working directory. If there are previously reduced files that you would like to keep, set save_reduced=1 when calling the function.
def clean_up(save_reduced=0):
    #Does the calibrations directory already exist?
    check_and_delete('calibrations')

    #Remove existing log and list files.
    work_dir_path = os.getcwd()
    work_dir = os.listdir(work_dir_path)

    for item in work_dir:
        if item.endswith(".log") or item.endswith(".list"):
            check_and_delete(os.path.join(work_dir_path,item))

    #Next, we will remove all the existing fits files, except for the previously
    #reduced files, depending on what you set save_reduced to.
    if save_reduced:
        all_files = glob.glob('*.fits')
        save = dataselect.select_data(all_files, [], ['PROCESSED'])

        for s in save:
            check_and_delete(os.path.join(work_dir_path,s))

        if os.path.exists(os.path.join('reduced')):
            remain_files = os.listdir(work_dir_path)

            for item in remain_files:
                if os.path.splitext(item)[1] in ('.dat','.pdf','.fits','.png'):
                    #Check if the file already exists in reduced/
                    #If it does, delete it and replace it with the new copy
                    if os.path.exists(os.path.join(work_dir_path,'reduced',item)):
                        check_and_delete(os.path.join(work_dir_path,'reduced',item))
                        shutil.move(item,os.path.join(work_dir_path,'reduced'))

                    else:
                        shutil.move(item,os.path.join(work_dir_path,'reduced'))

        #Create reduced/ directory and move reduced files
        else:
            os.mkdir(os.path.join(work_dir_path,'reduced'))
            remain_files = os.listdir(work_dir_path)

            for item in remain_files:
                if os.path.splitext(item)[1] in ('.dat','.pdf','.fits','.png'):
                    shutil.move(item,os.path.join(work_dir_path,'reduced'))

    else:
        all_files = glob.glob('*.fits')
        for a in all_files:
            check_and_delete(os.path.join(work_dir_path,a))
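For example, to clear the working directory while keeping previously reduced (PROCESSED) products and moving them into a reduced/ subdirectory, you would call the line below; it is commented out here and meant to be run when you actually want to tidy up:
# Keep previously reduced products (moved into reduced/); delete everything else.
# clean_up(save_reduced=1)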
Create a directory for raw files¶
This tutorial will create a large number of intermediate files that will be temporarily stored in the working directory. To ensure none of the original data is lost, we will create a directory called raw to store the preliminary data safely.
check_and_delete('raw')
os.mkdir('raw')
Downloading the data¶
Download the spectroscopic and calibration data from the Gemini archive to the current working directory. This step only needs to be executed once.
If you are running this notebook for the first time and need to download the dataset, leave the variable download="True" in the cell below. If it is set to anything else, the notebook will skip the download, which is particularly useful if you run the notebook more than once.
%%bash

# Create a list of FITS files to be downloaded.
echo "\
https://archive.gemini.edu/file/S20230416S0047.fits
https://archive.gemini.edu/file/S20230416S0049.fits
https://archive.gemini.edu/file/S20230416S0050.fits
https://archive.gemini.edu/file/S20230416S0051.fits
https://archive.gemini.edu/file/S20230416S0073.fits
https://archive.gemini.edu/file/S20230416S0079.fits
https://archive.gemini.edu/file/S20230417S0011.fits
https://archive.gemini.edu/file/S20230417S0012.fits
https://archive.gemini.edu/file/S20230417S0013.fits
https://archive.gemini.edu/file/S20230417S0014.fits
https://archive.gemini.edu/file/S20230417S0015.fits
https://archive.gemini.edu/file/S20230417S0036.fits
https://archive.gemini.edu/file/S20230417S0037.fits
https://archive.gemini.edu/file/S20230417S0038.fits
https://archive.gemini.edu/file/S20230417S0039.fits
https://archive.gemini.edu/file/S20230417S0040.fits
https://archive.gemini.edu/file/bpm_20220601_ghost_blue_11_full_4amp.fits
https://archive.gemini.edu/file/bpm_20220601_ghost_red_11_full_4amp.fits\
" > ghost.list
%%bash

download="True"

if [ $download == "True" ]; then
    wget --no-check-certificate -N -q -P './raw' -i ghost.list

else
    echo "Skipping download. To download the data set used in this notebook, set download=True."
fi
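If you prefer to stay in Python rather than use the %%bash magic, a minimal sketch of an equivalent download (an alternative not used elsewhere in this notebook, assuming ghost.list was already created by the first cell above) is:
# Pure-Python alternative to the wget cell above (a sketch).
# Downloads each archive URL listed in ghost.list into ./raw, skipping existing files.
import os
import urllib.request

with open('ghost.list') as fh:
    urls = [line.strip() for line in fh if line.strip()]

os.makedirs('raw', exist_ok=True)
for url in urls:
    dest = os.path.join('raw', os.path.basename(url))
    if not os.path.exists(dest):
        urllib.request.urlretrieve(url, dest)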
Create file lists¶
This dataset contains science and calibration frames. Other programs may include different targets and exposure times, and the raw data can be organized in different ways. The DRAGONS data reduction pipeline does not organize the data for you; you have to do it yourself, and DRAGONS provides tools to help you with that.
The first step is to create lists that will be used in the data reduction process.
all_files = glob.glob('raw/S2023*.fits')
all_files.append(glob.glob('raw/bpm*.fits')[0])
all_files.append(glob.glob('raw/bpm*.fits')[1])
all_files.sort()
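dataselect classifies frames using their Astrodata tags. If you want to see how a given raw frame will be classified before building the lists, a quick sketch is:
# Print the Astrodata tags of a few raw frames; dataselect uses these same
# tags (BIAS, FLAT, ARC, SLIT, RED, BLUE, ...) to build the lists below.
# The [:5] slice is only to keep the output short.
for fname in all_files[:5]:
    ad = astrodata.open(fname)
    print(os.path.basename(fname), sorted(ad.tags))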
Setting up the DRAGONS logger¶
DRAGONS comes with a local calibration manager that uses the same calibration association rules as the Gemini Observatory Archive. This allows reduce to make requests to a local lightweight database for matching processed calibrations when it needs them to reduce a dataset.
The cell below configures the DRAGONS logger and tells the system where to put the calibration database, which will keep track of the processed calibrations we send to it.
logutils.config(file_name='ghost.log')
caldb = cal_service.set_local_database()
caldb.init("w")
Add the Bad Pixel Masks to the calibration database¶
caldb.add_cal(glob.glob('raw/bpm*.fits')[0])
caldb.add_cal(glob.glob('raw/bpm*.fits')[1])
update_list() and reduce_func()¶
This notebook will require updating the list of files in your working directory and calling the reduce command several times. To reduce the repetitive text, we have created two functions that will cut down the number of lines included in this notebook.
def update_list():
    #Create a new file list that contains the intermediate files:
    #identify all of the files in the working directory.
    intermediate = os.listdir()
    new_all_files = []

    #Since os.listdir() returns all files in the working directory,
    #this loop will pick out only the fits files and add them to a list.
    for i in intermediate:
        if os.path.splitext(i)[1] == '.fits':
            new_all_files.append(i)

    print('%i files in the list.'%len(new_all_files))
    return new_all_files

def reduce_func(files_list, uparms=None, recipename=None):
    #Use DRAGONS' Reduce class to reduce the provided list of files.
    #By default, this function uses the default settings for Reduce().
    #uparms: a single tuple with the primitive name and parameter in the first element
    #        and the value in the second, e.g. ('stackFrames:operation', 'median');
    #        it is wrapped in a list before being passed to Reduce.
    #recipename: the name of the recipe to use, e.g. 'makeIRAFCompatible'.
    reduce = Reduce()
    reduce.files.extend(files_list)

    if uparms is not None:
        reduce.uparms = [uparms]

    if recipename is not None:
        reduce.recipename = recipename

    reduce.runr()
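reduce_func() is called throughout the rest of this notebook. For reference, a typical call looks like the line below; this exact form is used later for the spectrophotometric standard, and it is commented out here only because stdred has not been defined yet.
# Illustrative only -- stdred is defined later in the notebook.
# reduce_func(stdred, uparms=('scaleCountsToReference:tolerance', 1), recipename='reduceStandard')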
Select and reduce biases¶
biasbundles = dataselect.select_data(all_files, ['BIAS'], [])
print(biasbundles)
Use reduce_func() to reduce the biases¶
When this cell is done running, three files will be created for each bias (science, flat, and arc). They will have the suffix *_blue001.fits, *_red001.fits, and *_slit.fits.
reduce_func(biasbundles)
Update list¶
DRAGONS' reduce() function creates a lot of intermediate files that are stored in the working directory. Before calling it again, we first need to update our list of files using update_list().
new_all_files = update_list()
Now use dataselect to choose the slit biases.¶
biasslit = dataselect.select_data(new_all_files, ['BIAS','SLIT'])
print(biasslit)
Reduce the slit biases.¶
When done running, a new file will be created called S20230417S0040_slit_bias.fits.
reduce_func(biasslit)
Update the list of files.¶
new_all_files2 = update_list()
Select and reduce science biases.¶
Use dataselect to choose the red science biases.¶
expression = "binning=='2x2'"
parsed_expr = dataselect.expr_parser(expression)
biasredsci = dataselect.select_data(new_all_files2, ['BIAS', 'RED'], [], parsed_expr)
Reduce the red science biases.¶
Once done running, a new file called S20230417S0012_red001_bias.fits will exist in the working directory.
reduce_func(biasredsci)
Select the blue science biases.¶
expression = "binning=='2x2'"
parsed_expr = dataselect.expr_parser(expression)
biasbluesci = dataselect.select_data(new_all_files2, ['BIAS','BLUE'], [], parsed_expr)
Reduce the blue science biases.¶
A single file called S20230417S0011_blue001_bias.fits will be created after running this cell.
reduce_func(biasbluesci)
Select the flat/arc biases and reduce them.¶
Select the red flat/arc biases and reduce them.¶
expression = "binning=='1x1'"
parsed_expr = dataselect.expr_parser(expression)
biasredflatarc = dataselect.select_data(new_all_files2, ['BIAS','RED'], [], parsed_expr)
Running the following cell will create a new file called S20230417S0038_red001_bias.fits.
reduce_func(biasredflatarc)
Select the blue flat/arc biases and reduce them.¶
expression = "binning=='1x1'"
parsed_expr = dataselect.expr_parser(expression)
biasblueflatarc = dataselect.select_data(new_all_files2, ['BIAS','BLUE'], [], parsed_expr)
Running the following cell will create a new file called S20230417S0039_blue001_bias.fits.
reduce_func(biasblueflatarc)
Clean-up¶
GHOST reduction creates many, often big, files in the working directory. It is recommended to clean up between each reduction phase. If you want to save the intermediate files, move them (mv) somewhere else. In this tutorial, we will simply delete them.
%%bash

rm *fits
Select and reduce master and slit flats¶
flatbundles = dataselect.select_data(all_files, ['FLAT'], [])
Running this cell will generate 11 files from the science flat S20230416S0047.fits. Five will have the suffix _blue00*.fits, five will have the suffix _red00*.fits, and the remaining file will have the suffix _slit.fits.
reduce_func(flatbundles)
Update the list of files.¶
new_all_files3 = update_list()
Select and reduce the flats.¶
Select the slit-flats and reduce them.¶
slitflat = dataselect.select_data(new_all_files3, ['SLITFLAT'], [])
The following cell will create a single file called S20230416S0047_slit_slitflat.fits.
Note: This cell will give an ERROR regarding the inputs having different numbers of SCI extensions. This is a known issue with DRAGONS and will show up regardless of the data you provide. This ERROR can be safely ignored.
reduce_func(slitflat)
Select and reduce the red flats.¶
flatred = dataselect.select_data(new_all_files3, ['FLAT','RED'], [])
Running the next cell will also create a single file called S20230416S0047_red002_flat.fits.
reduce_func(flatred)
Select and reduce the blue flats.¶
flatblue = dataselect.select_data(new_all_files3, ['FLAT','BLUE'], [])
The following cell will create a file called S20230416S0047_blue001_flat.fits.
reduce_func(flatblue)
Clean-up¶
%%bash

rm *fits
Select and reduce arcs¶
arcbundles = dataselect.select_data(all_files, ['ARC'], [])
Running the next cell will create 9 files, three for each of the science arcs. Three will have the suffix _blue001.fits, three will have the suffix _red001.fits, and the remaining three will have the suffix _slit.fits.
reduce_func(arcbundles)
Update the list of files.¶
new_all_files4 = update_list()
Select and reduce the slit-viewer data.¶
arcslit_1 = dataselect.select_data(new_all_files4, ['ARC','SLIT'], [])
#The original tutorial's lists are ordered numerically by default, while this version's are not.
#A few lines of code have been added here to sort the list manually.
arcslit_1.sort()
#Keep only the first slit-viewer arc frame.
arcslit_2 = [arcslit_1[0]]
print(arcslit_2)
The following cell will return a single file called S20230416S0049_slit_slit.fits.
reduce_func(arcslit_2)
Select and reduce the arcs.¶
Select and reduce the red arcs.¶
arcred = dataselect.select_data(new_all_files4, ['ARC','RED'], [])
arcred.sort()
arcred
The following cell will create a file called S20230416S0049_red001_arc.fits.
reduce_func(arcred)
Select and reduce the blue arcs.¶
arcblue = dataselect.select_data(new_all_files4, ['ARC','BLUE'], [])
arcblue.sort()
arcblue
The following cell will also create a single file called S20230416S0049_blue001_arc.fits.
reduce_func(arcblue)
Clean-up¶
%%bash

rm *fits
Select and reduce the spectroscopic standard¶
expression = "object=='CD -32 9927'"
parsed_expr = dataselect.expr_parser(expression)
stdbundles = dataselect.select_data(all_files, [], [], parsed_expr)
stdbundles.sort()
stdbundles
The next cell will create 5 files starting with the same name as the standard. One will have the suffix _blue001.fits, three will have the suffix _red00*.fits, and the final file will have the suffix _slit.fits.
reduce_func(stdbundles)
Update the list of files.¶
new_all_files5 = update_list()
Select and reduce the standard data.¶
Select the slit-viewer standard data and reduce them.¶
stdslit = dataselect.select_data(new_all_files5, ['SLIT'], [])
stdslit.sort()
stdslit
Unlike the previous cells, the following cell will create four new fits files and a pdf. The pdf will have the suffix _slit_slitflux.pdf. One fits file will have the suffix _slit_blue001_slit.fits and the remaining three will have the suffix _slit_red00*_slit.fits.
Note: This cell will give an ERROR regarding the inputs having different numbers of SCI extensions. This is a known issue with DRAGONS and will show up regardless of the data you provide. This ERROR can be safely ignored.
reduce_func(stdslit)
Select the red standard star data and reduce it.¶
stdred = dataselect.select_data(new_all_files5, ['RED'], [])
stdred.sort()
stdred
Running the following cell will generate a file called S20230416S0073_red001_standard.fits.
reduce_func(stdred, uparms=('scaleCountsToReference:tolerance', 1), recipename='reduceStandard')
Select the blue standard star data and reduce it.¶
stdblue = dataselect.select_data(new_all_files5, ['BLUE'], [])
stdblue.sort()
stdblue
The following cell will also produce a single file called S20230416S0073_blue001_standard.fits.
reduce_func(stdblue, uparms=('scaleCountsToReference:tolerance', 1), recipename='reduceStandard')
Clean-up¶
%%bash

rm *fits
Select and reduce science data¶
expression = "object=='XX Oph'"
parsed_expr = dataselect.expr_parser(expression)
scibundles = dataselect.select_data(all_files, [], [], parsed_expr)
Running the next cell will create 5 files whose names will start with the name of the science file. One will have the suffix _blue001.fits, three will have the suffix _red00*.fits, and the final file will have the suffix _slit.fits.
reduce_func(scibundles)
Update the list of files.¶
new_all_files6 = update_list()
Select the slit-viewer science data and reduce it.¶
scislit = dataselect.select_data(new_all_files6, ['SLIT'], [])
Like when reducing the standard slit-viewer data, the following cell will create 5 files. One will have the suffix _slit_blue001_slit.fits, three will have the suffix _slit_red00*_slit.fits, and the last one will have the suffix _slit_slitflux.pdf.
Note: This cell will give an ERROR regarding the inputs having different numbers of SCI extensions. This is a known issue with DRAGONS and will show up regardless of the data you provide. This ERROR can be safely ignored.
reduce_func(scislit)
Update the list of files.¶
new_all_files7 = update_list()
Select and reduce the science data.¶
Select and reduce the red science frames.¶
scired = dataselect.select_data(new_all_files7, ['RED'], [])
scired = np.sort(scired)
Running the following cell will create 4 files. One will have the suffix _red001_dragons.fits, and the other three will have the suffix _red00*_calibrated.fits.
reduce_func(scired)
Select and reduce the blue science frames.¶
sciblue = dataselect.select_data(new_all_files7, ['BLUE'], [])
sciblue = np.sort(sciblue)
Running the following cell will create two files. The first will have the suffix _blue001_calibrated.fits and the other will have the suffix _blue001_dragons.fits.
reduce_func(sciblue)
Plot the reduced red and blue spectra¶
#Display S20230416S0079_red001_dragons.fits and S20230416S0079_blue001_dragons.fits
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,5))

red_file = 'S20230416S0079_red001_dragons.fits'
blue_file = 'S20230416S0079_blue001_dragons.fits'

red_pf = astrodata.open(red_file)
blue_pf = astrodata.open(blue_file)

red_flux = red_pf[0].data
blue_flux = blue_pf[0].data

red_flux_array = np.array(red_flux)
blue_flux_array = np.array(blue_flux)

red_wave = red_pf[0].wcs(np.arange(red_flux.size)).astype(np.float32)
blue_wave = blue_pf[0].wcs(np.arange(blue_flux.size)).astype(np.float32)

# Convert the λ from nm to Å
red_wave_array = np.array(red_wave*10)
blue_wave_array = np.array(blue_wave*10)

ax1.plot(blue_wave_array, blue_flux_array, lw=0.4)
ax1.set_xlim(3450, 5450)
ax1.set_ylim(-0.05*10**(-12), 0.2*10**(-12))
ax1.set_xlabel(r'Wavelength [$\AA$]')
ax1.set_ylabel(r'Flux [$W m^{-2} nm^{-1}$]')
ax1.set_title(blue_file, size=11, fontweight='bold')

ax2.plot(red_wave_array, red_flux_array, lw=0.4)
ax2.set_xlim(5100, 10700)
ax2.set_ylim(-0.05*10**(-12), 0.23*10**(-12))
ax2.set_xlabel(r'Wavelength [$\AA$]')
ax2.set_ylabel(r'Flux [$W m^{-2} nm^{-1}$]')
ax2.set_title(red_file, size=11, fontweight='bold')

plt.show()
Save 1D Spectra¶
If you would like to save the finished spectra as text files instead of fits files, use the write1DSpectra recipe as demonstrated in the following two cells.
Running the first cell will create two files. The first will be called S20230416S0079_red001_dragons_001.dat and the second will be called S20230416S0079_red001_dragons_002.dat.
Running the second cell will also create two files. The first will be called S20230416S0079_blue001_dragons_001.dat and the second will be called S20230416S0079_blue001_dragons_002.dat.
red_to_1d = ['S20230416S0079_red001_dragons.fits']
reduce_func(red_to_1d, recipename='write1DSpectra')
blue_to_1d = ['S20230416S0079_blue001_dragons.fits']
reduce_func(blue_to_1d, recipename='write1DSpectra')
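If you want a quick look at the exported text spectra, a minimal sketch using astropy's generic ASCII reader is below; the file name, column names, and layout depend on the write1DSpectra defaults in your DRAGONS version, so treat them as assumptions and adjust as needed:
# Quick look at one of the exported .dat spectra (a sketch; the format is assumed
# to be readable by astropy's generic 'ascii' reader).
from astropy.table import Table

spec = Table.read('S20230416S0079_red001_dragons_001.dat', format='ascii')
print(spec.colnames)
print(spec[:5])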
Save reduced spectra¶
If you want to save a PNG stamp plot of the reduced red and blue spectra, run the following cell. You also have the option to save the image in a different format, including SVG, eps, and PS, by replacing 'PNG' in the second to last line with your desired format. This code will save the red and blue spectra separately. One saved file will be called S20230416S0079_blue001_dragons.png, and the other will be called S20230416S0079_red001_dragons.png.
# Author: David Herrera - June 2024
# Create a list of all DRAGONS-reduced fits files in the current directory.
# glob is used here instead of a shell 'ls' call so the cell does not depend
# on the shell supporting brace expansion.
fnames_list = sorted(glob.glob('*blue00?*_dragons.fits') + glob.glob('*red00?*_dragons.fits'))

# Read each fits file name
for fname in fnames_list:
    # Determine if it is a red or a blue spectrum
    file = str(fname.strip())
    if '_red' in file:
        band = 'red'
    else:
        band = 'blue'
    # Open and read the data from each FITS file
    ad = astrodata.open(file)
    flux = ad[0].data
    lam = ad[0].wcs(np.arange(flux.size)).astype(np.float32)

    # Convert the λ from nm to Å
    lambda_array = np.array(lam*10)
    flux_array = np.array(flux)

    # Define lambda ranges for each panel depending on the band
    if band == 'red':
        lambda_ranges = [(5370, 6330), (6270, 7230), (7170, 8130), (8070, 9030), (8970, 9930)]
    else:
        lambda_ranges = [(3790, 4110), (4090, 4410), (4390, 4710), (4690, 5010), (4990, 5310)]
    # Create a figure and a set of 5 subplots
    fig, axs = plt.subplots(len(lambda_ranges), 1, sharex=False, figsize=(10, 8))

    # Plot data in each range
    for i, (lam_min, lam_max) in enumerate(lambda_ranges):
        # Filter data for the current range
        mask = (lambda_array >= lam_min) & (lambda_array <= lam_max)
        lambda_filtered = lambda_array[mask]
        flux_filtered = flux_array[mask]

        if len(lambda_filtered) > 0 and len(flux_filtered) > 0:
            # Plot the data
            axs[i].plot(lambda_filtered, flux_filtered, c=band, lw=0.6)
            # Calculate the median flux in the current range
            flux_median = np.median(flux_filtered)
            # Set the x-limits
            axs[i].set_xlim(lam_min, lam_max)
            # Set the y-limits for this particular panel
            ylim = (-0.25 * flux_median, 2.5 * flux_median)
            axs[i].set_ylim(ylim)
            # Handle the ticks
            axs[i].minorticks_on()
            if band == 'red':
                axs[i].set_xticks(np.arange(lam_min+30, lam_max+30, step=150))
                axs[i].set_xticks(np.arange(lam_min+80, lam_max, step=50), minor=True)
            axs[i].tick_params(axis='y', which='major', labelsize=8)
        else:
            # Handle the case where no data points are in the range
            axs[i].text(0.5, 0.5, 'No data in this range', transform=axs[i].transAxes,
                        ha='center', va='center', color=band)

        # Only put the y-label on the 3rd panel
        if i == 2: axs[i].set_ylabel(r'Flux [$W m^{-2} nm^{-1}$]')

    # Set x-axis label for the bottom plot
    axs[-1].set_xlabel('λ(Å)')
    # Set title for the whole plot
    fig.suptitle(file)
    # Adjust layout to remove gaps between subplots
    plt.tight_layout()

    # Show the plot (optional)
    #plt.show()

    # Save plot in a file (it can be a png, svg, eps, ps)
    fig.savefig(file.replace('.fits', '.png'), dpi='figure', format='png',
                metadata=None, bbox_inches=None, pad_inches=0.1)
    plt.close()
Make IRAF compatible¶
The finished products of this notebook conform to DRAGONS' FITS standards, which do not match what IRAF expects. If you would like the final reduced spectra to be compatible with IRAF, you can use the makeIRAFCompatible recipe as shown below. (Uncomment before running.)
Running the first cell will create a file called S20230416S0079_red001_dragons_irafCompatible.fits.
Running the second cell will create a file called S20230416S0079_blue001_dragons_irafCompatible.fits.
# reduce_iraf = Reduce()
# red_dragons_files = ['S20230416S0079_red001_dragons.fits']
# reduce_iraf.files.extend(red_dragons_files)
# reduce_iraf.recipename = 'makeIRAFCompatible'
# reduce_iraf.runr()
# reduce_iraf = Reduce()
# blue_dragons_files = ['S20230416S0079_blue001_dragons.fits']
# reduce_iraf.files.extend(blue_dragons_files)
# reduce_iraf.recipename = 'makeIRAFCompatible'
# reduce_iraf.runr()
This notebook has only used DRAGONS' default options. If you would like all the individual exposures to be reduced separately, you can look into the combineOrders() command.
Optional: Clean up working directory. (Uncomment before running.)¶
# clean_up(save_reduced=0)