add notes on daily loading
weaverba137 committed Sep 17, 2024
1 parent 1a3c90a commit a195c06
Showing 2 changed files with 25 additions and 13 deletions.
12 changes: 12 additions & 0 deletions doc/dynamic.rst
@@ -27,6 +27,9 @@ Daily Database Loading Requirements
Possible Load Scenario
----------------------

+For individual tiles
+~~~~~~~~~~~~~~~~~~~~
+
1. Read ``tiles-daily``, find changes. Update ``daily.tile``.
2. Find corresponding exposures. Update ``daily.exposure``, ``daily.frame``.
3. Find any *new* fiberassign files and obtain the list of new potentials (because potentials include observed targets).
@@ -35,6 +38,15 @@ Possible Load Scenario
6. Update ``daily.fiberassign``, ``daily.potential``.
7. Read corresponding ``tiles/cumulative`` redshift files. Update ``daily.ztile``. (See the sketch immediately below.)
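
A minimal sketch of how this per-tile sequence could be driven; the names below (``changed_tiles``, ``per_tile_steps``) are placeholders standing in for the real ``specprodDB.load`` machinery, not an existing API:

.. code-block:: python

    from typing import Callable, Iterable

    def update_changed_tiles(changed_tiles: Iterable[int],
                             per_tile_steps: Iterable[Callable[[int], None]]) -> None:
        """Apply each per-tile load step, in order, to every changed tile.

        ``changed_tiles`` would come from diffing ``tiles-daily`` against
        ``daily.tile`` (step 1); each callable in ``per_tile_steps`` would
        perform one of steps 2-7 (exposures/frames, fiberassign/potential,
        cumulative ztile) for a single TILEID.
        """
        steps = list(per_tile_steps)
        for tileid in changed_tiles:
            for step in steps:
                step(tileid)
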
+
+For initial load
+~~~~~~~~~~~~~~~~
+
+1. Load the exposures and tiles files.
+2. Compute "monolithic" redshift catalogs. These are needed as input to photometry.
+3. Create "monolithic" photometry and target catalogs.
+4. The load procedure then resembles loading a static specprod, just without healpix redshifts.
+5. Update the "primary" columns and any q3c indexes. (See the sketch after this list.)
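
A sketch of the initial load as an ordered pipeline; every name below is a hypothetical callable used purely for illustration, not an existing ``specprodDB`` function:

.. code-block:: python

    def run_initial_load(load_exposures_and_tiles, build_monolithic_zcat,
                         build_photometry_and_targets, load_static_style,
                         finalize_primary_and_q3c):
        """Run the one-time bootstrap in the order of steps 1-5 above.

        Every argument is a hypothetical callable; the real work lives
        elsewhere in specprodDB.
        """
        load_exposures_and_tiles()                  # 1. exposures and tiles files
        zcat = build_monolithic_zcat()              # 2. monolithic redshift catalogs
        build_photometry_and_targets(zcat)          # 3. monolithic photometry and target catalogs
        load_static_style(healpix_redshifts=False)  # 4. like a static specprod, minus healpix redshifts
        finalize_primary_and_q3c()                  # 5. "primary" columns and q3c indexes
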

Automated Extraction of Targeting and Photometry
------------------------------------------------

26 changes: 13 additions & 13 deletions py/specprodDB/load.py
@@ -1971,19 +1971,7 @@ def main():
'chunksize': chunksize,
'maxrows': maxrows
}],
-'redshift': [{'filepaths': zpix_file,
-'tcls': Zpix,
-'hdu': 'ZCATALOG',
-'preload': _survey_program,
-'expand': {'COEFF': ('coeff_0', 'coeff_1', 'coeff_2', 'coeff_3', 'coeff_4',
-'coeff_5', 'coeff_6', 'coeff_7', 'coeff_8', 'coeff_9',)},
-'convert': {'id': lambda x: x[0] << 64 | x[1]},
-# 'rowfilter': lambda x: (x['TARGETID'] > 0) & ((x['TARGETID'] & 2**59) == 0),
-'rowfilter': no_sky,
-'chunksize': chunksize,
-'maxrows': maxrows
-},
-{'filepaths': ztile_file,
+'redshift': [{'filepaths': ztile_file,
'tcls': Ztile,
'hdu': 'ZCATALOG',
'preload': _survey_program,
@@ -2017,6 +2005,18 @@
'chunksize': chunksize,
'maxrows': maxrows
}]}
+if specprod != 'daily':
+loaders['redshift'].append({'filepaths': zpix_file,
+'tcls': Zpix,
+'hdu': 'ZCATALOG',
+'preload': _survey_program,
+'expand': {'COEFF': ('coeff_0', 'coeff_1', 'coeff_2', 'coeff_3', 'coeff_4',
+'coeff_5', 'coeff_6', 'coeff_7', 'coeff_8', 'coeff_9',)},
+'convert': {'id': lambda x: x[0] << 64 | x[1]},
+# 'rowfilter': lambda x: (x['TARGETID'] > 0) & ((x['TARGETID'] & 2**59) == 0),
+'rowfilter': no_sky,
+'chunksize': chunksize,
+'maxrows': maxrows})
try:
loader = loaders[options.load]
except KeyError:
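
The net effect of the change above is that the per-tile (``Ztile``) redshift loader is always defined, while the healpix-based (``Zpix``) loader is appended only for non-daily productions, since the daily production does not include healpix redshifts. A simplified, standalone illustration of that pattern, using toy stand-ins for the real loader dictionaries:

.. code-block:: python

    def build_redshift_loaders(specprod, ztile_spec, zpix_spec):
        """Return the list of redshift loader specs for a given production.

        ``ztile_spec`` and ``zpix_spec`` are toy stand-ins for the
        dictionaries constructed in specprodDB.load.main(); for the
        daily production the Zpix loader is never registered.
        """
        loaders = [ztile_spec]
        if specprod != 'daily':
            loaders.append(zpix_spec)
        return loaders

    # build_redshift_loaders('daily', ztile, zpix) -> [ztile] only
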
