TOP = ..
SUBDIRS = sphinx
include $(TOP)/tools/make/RULES_DM
Release 1.1 (03/10/2017)
=============================
- Introduced integration with the Beamline Scheduling System:
  - New commands:
    - list-runs
    - list-proposals
    - get-proposal
  - Modified the add-experiment command to automatically add users
    associated with a given beamline proposal
- Added the following functionality for managing DAQs:
  - Max. run time specification causes the DAQ to be stopped automatically
  - Destination directory specification causes files to be uploaded into
    a specific directory relative to the experiment root path
  - The upload data/target directory on exit options trigger an automatic
    upload of the given directory after the DAQ completes
- Added the following functionality for managing uploads:
  - Destination directory specification causes files to be uploaded into
    a specific directory relative to the experiment root path
- Introduced framework for higher-level beamline-specific tools:
  - New commands that combine adding an experiment and running DAQs or
    uploads
- Introduced Sphinx as the python API documentation framework
- Resolved possible timeout issue when starting a DAQ or directory upload
  with a directory containing a large number of files
- Simplified the data directory command line option for beamlines that use
  gridftp (via the DM_DATA_DIRECTORY_MAP environment variable)

Release 1.0 (01/31/2017)
=============================
- Introduced the concept of an experiment station and redesigned the
  authorization mechanisms to allow beamline managers to manage their
  stations; all APIs and CLIs now conform to the new authorization scheme
- Modified the get-experiments utility to allow retrieving the list of
  experiments for a given station
- Cleaned up the web portal by removing unused views, and enabled station
  management functionality
- GPFS DDN (extrepid) has replaced xstor as the main APS storage
- CLI changes:
  - The add-experiment command requires a station name (can be set from an
    environment variable); the experiment type can be specified using the
    type name
  - The get-experiments command requires a station name for beamline
    managers (can be set from an environment variable)
  - The start-experiment command is now optional

Release 0.15 (11/01/2016)
=============================
- Resolved issue with incorrect accounting of processing errors for DAQs
- Improved the DAQ processing algorithm to avoid resource starvation between
  simultaneous DAQs and uploads
- Enhanced monitoring status information for both DAQs and uploads

Release 0.14 (10/14/2016)
=============================
- Introduced new framework and utilities for synchronizing users with the
  APS DB
- Resolved several issues with special characters in file names for the
  gridftp transfer plugin

Release 0.13 (05/27/2016)
=============================
- Added SFTP file system observer agent
- Enhanced the MongoDB plugin with file md5 sum calculation

Release 0.12 (05/06/2016)
=============================
- Developed processing for HDF5 metadata in the Mongo cataloging plugin
- Modified catalog API and service interfaces to use file collections on
  a per-experiment basis

Release 0.11 (04/29/2016)
=============================
- Resolved issue with the upload command for directories containing a large
  number of files
- Implemented enhanced upload processing algorithm to avoid resource
  starvation between simultaneous DAQs and uploads
- Added new polling file system observer agent as an option for monitoring
  directories
- Reworked the catalog API and corresponding MongoDB interfaces to use
  unique experiment file paths, rather than file names

Release 0.10 (03/11/2016)
=============================
- Added dm-list-daqs and dm-list-uploads commands
- Resolved issue with newly created directories being treated as files for
  real-time data acquisitions

Release 0.9 (02/25/2016)
=============================
- Developed directory processing mode for uploads; in this mode file
  transfer plugins transfer entire directories as opposed to individual
  files
- Added dm-get-processing-plugins command
- Resolved working directory issue that could occur with simultaneous
  uploads

Release 0.8 (01/26/2016)
=============================
- Enhanced upload/DAQ performance and functionality (hidden files are not
  processed; for uploads the system can detect files that have already been
  processed; improved handling and reporting of processing errors)
- Source file checksum is calculated for rsync/gridftp plugins by default
- Added dm-stop-upload command
- Resolved Globus Online user authorization delay issue

Release 0.7 (12/08/2015)
=============================
- Introduced framework and user interfaces for tracking progress of file
  uploads and data acquisitions in the DAQ service
- Added ability to monitor multiple directories for the same experiment
  simultaneously (required changes to DAQ service REST interfaces)
- Enhanced start/stop DAQ and upload commands to use the DM_FILE_SERVER_URL
  environment variable
- Added user interfaces and utilities that enable experiment data download
  from machines that have SSH access to the storage host

Release 0.6 (11/06/2015)
=============================
- Added file system observer agent interface for the DAQ service
- Implemented FTP file system observer for the DAQ service
- Added interfaces for deleting a user experiment role in the DS service
- Introduced Java REST API framework, and specific experiment DS service API
- Web Portal notifies the DS service about experiment user modifications

Release 0.5 (10/08/2015)
=============================
- Implemented Single Sign-On solution for backend services
- Enabled user authentication via login file
- Added file stat (with checksum) interface in the DS web service
- After adding a user role to an experiment via the command line, the user
  is also added to the experiment group (if one exists)
- Added rsync file transfer plugin with checksum and delete

Release 0.4 (09/21/2015)
=============================
- A number of minor modifications made in preparation for test deployment
  at beamlines

Release 0.3 (07/22/2015)
=============================
- Developed initial version of the Cataloging Web Service based on MongoDB
- Developed sample processing plugins: file metadata catalog, SDDS
  processing, SGE job submission

Release 0.2 (06/30/2015)
=============================
- Implemented storage permission management and user group management
- Developed common file processing service plugin framework

Release 0.1 (04/21/2015)
=============================
- Functional web portal (user, experiment, and policy pages)
- Developed web service and its API/CLI frameworks
- Developed initial version of the Data Storage Web Service
- Developed initial version of the Data Acquisition Web Service; the DAQ
  service can monitor a file system on a detector node and subsequently
  transfer data to storage
"1-BM-B,C"
"1-ID-B,C,E"
"2-BM-A,B"
"2-ID-D"
"2-ID-E"
"3-ID-B,C,D"
"4-ID-C"
"4-ID-D"
"5-BM-C"
"5-BM-D"
"5-ID-B,C,D"
"6-BM-A,B"
"6-ID-B,C"
"6-ID-D"
"7-BM-B"
"7-ID-B,C,D"
"8-BM-B"
"8-ID-E"
"8-ID-I"
"9-BM-B,C"
"9-ID-B,C"
"10-BM-A,B"
"10-ID-B"
"11-BM-B"
"11-ID-B"
"11-ID-C"
"11-ID-D"
"12-BM-B"
"12-ID-B"
"12-ID-C,D"
"13-BM-C"
"13-BM-D"
"13-ID-C,D"
"13-ID-E"
"14-BM-C"
"14-ID-B"
"15-ID-B,C,D"
"16-BM-B"
"16-BM-D"
"16-ID-B"
"16-ID-D"
"17-BM-B"
"17-ID-B"
"18-ID-D"
"19-BM-D"
"19-ID-D"
"20-BM-B"
"20-ID-B,C"
"21-ID-D"
"21-ID-E"
"21-ID-F"
"21-ID-G"
"22-BM-D"
"22-ID-D"
"23-BM-B"
"23-ID-B"
"23-ID-D"
"24-ID-C"
"24-ID-E"
"26-ID-C"
"27-ID-B"
"29-ID-C,D"
"30-ID-B,C"
"31-ID-D"
"32-ID-B,C"
"33-BM-C"
"33-ID-D,E"
"34-ID-C"
"34-ID-E"
"35-ID-B,C,D,E"
# Need 1 terminal on DAQ node, 1 terminal on HPC node and 2 terminals on DS
# node
#
# SCENARIO 1: BASIC UPLOAD
#
# ssh -X dm@dmstorage: start services
cd /opt/DM/dev
source setup.sh
./etc/init.d/dm-postgresql start
./etc/init.d/dm-glassfish start
./etc/init.d/dm-mongodb start
./etc/init.d/dm-ds-web-service start
./etc/init.d/dm-cat-web-service start
# ssh -X dm@dmdaq: start services
cd /opt/DM/dev
source setup.sh
./etc/init.d/dm-daq-web-service start
# ssh -X dm@dmhpc: start services, check NFS
cd /opt/DM/dev
source setup.sh
./etc/init.d/dm-daq-web-service start
ls -l /net/dmstorage/opt/DM
#####################################################################
# dm@dmstorage: check directory content on the storage node
ls -l /opt/DM/data
# ssh sveseli@dmstorage: add and start experiment e1
source /opt/DM/etc/dm.setup.sh
dm-add-experiment --name e1 --type-id 1
dm-start-experiment --name e1
# dm@dmstorage: check directory content on the storage node
# note that experiment directory permissions are restricted
ls -l /opt/DM/data/ESAF
ls -l /opt/DM/data/ESAF/e1/
# ssh dm@dmdaq: source setup file, show test data
source /opt/DM/etc/dm.setup.sh
ls -lR /opt/DM/experiments/e1
cat /opt/DM/experiments/e1/file1
# dm@dmdaq: upload data for experiment e1
dm-upload --experiment e1 --data-directory /opt/DM/experiments/e1
# dm@dmstorage: check experiment storage directory content
# note permissions, ownership
ls -l /opt/DM/data/ESAF/e1/
ls -l /opt/DM/data/ESAF/e1/2015/07/09/
#
# SCENARIO 2: UPLOAD + METADATA CATALOG
#
# sveseli@dmstorage: get metadata for experiment files from cataloging service
dm-get-experiment-files --experiment e1
dm-get-experiment-file --experiment e1 --file file2 --display-keys=__all__
# dm@dmdaq: upload data for experiment e1, this time specify extra keys
dm-upload --experiment e1 --data-directory /opt/DM/experiments/e1 ownerUser:JohnC ownerGroup:APSU memo1:ApprovedByNDA memo2:DislikedByGD
# sveseli@dmstorage: get metadata for file 2 again
dm-get-experiment-file --experiment e1 --file file2 --display-keys=__all__
# sveseli@dmstorage: show metadata updates
dm-update-experiment-file --experiment e1 --file file3 quality:A --display-keys=id,fileName,quality
# sveseli@dmstorage: show metadata search
dm-get-experiment-files --experiment e1 quality:A
dm-get-experiment-files --experiment e1 storageFilePath:2015
#
# SCENARIO 3: UPLOAD + METADATA CATALOG + SDDS PARAMETERS
#
# sveseli@dmstorage: add and start experiment mm1
dm-add-experiment --name mm1 --type-id 1
dm-start-experiment --name mm1
# dm@dmdaq: upload data for experiment mm1, and request SDDS parameter
# processing
ls -lR /opt/DM/experiments/mm1
dm-upload --experiment mm1 --data-directory /opt/DM/experiments/mm1 ownerUser:JohnC ownerGroup:APSU processSddsParameters:True
# sveseli@dmstorage: get mm1 files, observe SDDS parameters
dm-get-experiment-files --experiment mm1
dm-get-experiment-files --experiment mm1 --display-keys=__all__ --display-format=dict
# dm@dmstorage: compare with sddsprintout (permissions do not allow sveseli
# account to access file)
export PATH=$PATH:/opt/epics/extensions/bin/linux-x86_64/
sddsprintout -parameters /opt/DM/data/ESAF/mm1/hallProbeScan-M1Proto-000000072-0009-000000.edf
#
# SCENARIO 4: UPLOAD + METADATA CATALOG + SDDS PARAMETERS + SCRIPT PROCESSING
#
# dm@dmstorage: show processing script
cat /opt/DM/processing/find_sdds_row_count.sh
/opt/DM/processing/find_sdds_row_count.sh /opt/DM/data/ESAF/mm1/hallProbeScan-M1Proto-000000072-0009-000000.edf
# sveseli@dmstorage: get mm1 files, note no key processingScriptOutput
dm-get-experiment-files --experiment mm1 --display-keys=fileName,processingScriptOutput
# dm@dmdaq: upload data for experiment mm1, request SDDS parameter
# processing, specify processing script
dm-upload --experiment mm1 --data-directory /opt/DM/experiments/mm1 processSddsParameters:True processingScript:/opt/DM/processing/find_sdds_row_count.sh
# sveseli@dmstorage: get mm1 files, note present key processingScriptOutput
dm-get-experiment-files --experiment mm1 --display-keys=fileName,processingScriptOutput
#
# SCENARIO 5: UPLOAD + METADATA CATALOG + SDDS PARAMETERS + HPC PROCESSING
#
# dm@dmstorage: show processing script
more /opt/DM/processing/sge_sdds_analysis.sh
# dm@dmstorage: show no png files in experiment directory
ls -l /opt/DM/data/ESAF/mm1/*.png
# dm@dmhpc: show empty home directory
cd
ls -l
# dm@dmhpc: show qstat
source /opt/sge/default/common/settings.sh
qstat -f
watch -d 'qstat -f'
# sveseli@dmstorage: get mm1 files, note only 1 file
dm-get-experiment-files --experiment mm1
# dm@dmdaq: upload data for experiment mm1, request SDDS parameter
# processing, specify SGE processing script
dm-upload --experiment mm1 --data-directory /opt/DM/experiments/mm1 processSddsParameters:True sgeJobScript:/opt/DM/processing/sge_sdds_analysis.sh
# sveseli@dmstorage: get mm1 files, note 2 files
dm-get-experiment-files --experiment mm1
# sveseli@dmstorage: get mm1 .png files, note parentFile key
dm-get-experiment-files --experiment mm1 fileName:.png --display-keys=__all__
# dm@dmhpc: show SGE output in home directory
ls -l
# dm@dmstorage: open processed file
xdg-open /opt/DM/data/ESAF/mm1/hallProbeScan-M1Proto-000000072-0009-000000.edf.png
#
# SCENARIO 6: DAQ + METADATA CATALOG + SDDS PARAMETERS + HPC PROCESSING
#
# sveseli@dmstorage: add and start experiment mm2
dm-add-experiment --name mm2 --type-id 1
dm-start-experiment --name mm2
# sveseli@dmstorage: get mm2 files, note no files
dm-get-experiment-files --experiment mm2
# dm@dmstorage: show no png files in experiment directory
ls -l /opt/DM/data/ESAF/mm2/*.png
# dm@dmstorage: tail log file to observe processing
tail -f /opt/DM/var/log/dm.ds-web-service.log
# dm@dmdaq: start DAQ for experiment mm2, request SDDS parameter
# processing, specify SGE processing script
rm -rf /tmp/data/mm2
mkdir -p /tmp/data/mm2
dm-start-daq --experiment mm2 --data-directory /tmp/data/mm2 processSddsParameters:True sgeJobScript:/opt/DM/processing/sge_sdds_analysis.sh
# dm@dmhpc: show qstat
watch -d 'qstat -f'
# dm@dmdaq: copy experiment mm2 files into observed directory, watch qstat
ls -l /opt/DM/experiments/mm2/
cp /opt/DM/experiments/mm2/* /tmp/data/mm2/ && sleep 5 && touch /tmp/data/mm2/* &
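# (the sleep + touch above presumably re-triggers the file system observer
#  for files copied before monitoring picked them up)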
tail -f /opt/DM/var/log/dm.daq-web-service.log
# sveseli@dmstorage: get mm2 files, note original + processed files
dm-get-experiment-files --experiment mm2
# dm@dmstorage: show png files in experiment directory
ls -l /opt/DM/data/ESAF/mm2/*.png
# dm@dmdaq: stop DAQ for experiment mm2
dm-stop-daq --experiment mm2
#
# SCENARIO 7: DATASET DEFINITION
#
# sveseli@dmstorage: add metadata for a couple of experiment e2 files
# with different keys
dm-add-experiment-file --experiment e2 --file x1 status:good
dm-add-experiment-file --experiment e2 --file y1 status:bad
dm-get-experiment-files --experiment e2 --display-keys=fileName,status
# sveseli@dmstorage: add dataset metadata
dm-add-experiment-dataset --experiment e2 --dataset d1 status:g.*
# sveseli@dmstorage: get dataset files, note only one file matches
dm-get-experiment-dataset-files --experiment e2 --dataset d1
# sveseli@dmstorage: add metadata for another e2 file that
# should match dataset constraint
dm-add-experiment-file --experiment e2 --file x2 status:great
dm-get-experiment-files --experiment e2 --display-keys=fileName,status
# sveseli@dmstorage: get dataset files, note two files match
dm-get-experiment-dataset-files --experiment e2 --dataset d1
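# note: the dataset constraint value (status:g.*) is evaluated as a regular
# expression, which is why both status:good and status:great match d1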
# Demo environment consists of three linux VMs:
# - data acquisition (DAQ), data storage (DS), sge cluster (HPC) nodes
# - CentOS 6.6, 64-bit
# - no shared storage
# - DS node runs PostgreSQL database server, Web Portal, DS Web Service,
# CAT Web Service, MongoDB server
# - DAQ node runs DAQ Web Service
# - HPC node runs SGE cluster
# Machine Preparation
# ===================
# install dependencies (all machines)
yum install -y gcc libgcc expect zlib-devel openssl-devel openldap-devel subversion make sed gawk autoconf automake wget readline-devel
# Download globus RPM repo and install gridftp (both machines)
# http://toolkit.globus.org/ftppub/gt6/installers/repo/globus-toolkit-repo-latest.noarch.rpm
yum install globus-gridftp
# Disable requiretty in /etc/sudoers
# Prepare gridftp server to use sshd (dmstorage machine)
globus-gridftp-server-enable-sshftp
# create system (dm) account on both machines, configure ssh-keys and
# authorized_keys files
# create several user accounts (dmstorage machine): dmuser1, dmuser2, dmuser3
# build and install epics base and SDDS/SDDSepics extensions under
# /opt/epics (dmstorage machine)
# build SDDS python under /opt/epics/extensions/src/SDDS/python/
# copy sdds.py into /opt/DM/support/python/linux-x86_64/lib/python2.7/
# copy /opt/epics/extensions/src/SDDS/python/O.linux-x86_64/sddsdatamodule.so
# into /opt/DM/support/python/linux-x86_64/lib/python2.7/lib-dynload/
# export /opt/DM to dmhpc node (see the concrete sketch below):
# yum install nfs-utils
# edit /etc/exports and add /opt/DM 192.168.100.8(rw,sync)
# exportfs -a
# restart nfs
# install sge on hpc machine, add dmstorage as submission node,
# copy /opt/sge to dmstorage
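# Concrete sketch of the NFS export steps above (assumes CentOS 6 service
# names; adjust the client address to match the dmhpc node):
yum install -y nfs-utils
echo '/opt/DM 192.168.100.8(rw,sync)' >> /etc/exports
exportfs -a
service nfs restart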
# configure /opt/DM area for software installation
mkdir -p /opt/DM
chown -R dm.dm /opt/DM
chmod 755 /opt/DM
# configure (or disable) firewall (both machines)
/etc/init.d/iptables stop
# DM Deployment: DS Machine
# =========================
# Log into dmstorage node and create local DM deployment directory
# in dm user home area
cd /opt/DM
ls -l
# Checkout code from the development trunk
svn co https://subversion.xray.aps.anl.gov/DataManagement/trunk dev
# Build support area
cd dev
make support
# Source setup
source setup.sh
# Create db
make db
# Configure Web Portal
# Note:
# - this needs to be done only during the first portal deployment,
# or after portal has been unconfigured explicitly
# - this step configures DB access
# - adds initial DM system user to the DB
make configure-web-portal
# Add a few users
#dm-add-user --username dmuser1 --first-name Test --last-name User1
#dm-add-user --username dmuser2 --first-name Test --last-name User2
#dm-add-user --username dmuser3 --first-name Test --last-name User3
# Deploy Web Portal
# Note:
# - deploys portal war file into glassfish
# - after this step, users can access portal at
# https://dmstorage.svdev.net:8181/dm
make deploy-web-portal
# Deploy DS Web Service
# Note:
# - generates SSL certificates and configuration files
# - after this step, DS web service is accessible at port 22236
# - log files are under DM/var/log
# - configuration files are under DM/etc
# - user setup file is DM/etc/dm.setup.sh
# - service control script is under DM/dm-0.2/etc/init.d
make deploy-ds-web-service
# Check functionality. Open second terminal and log into dmstorage node
# as user sveseli
# Source setup file to get access to DM commands
source /opt/DM/etc/dm.setup.sh
# Get user list as administrator (dm) account
dm-get-users
# DM Deployment: DAQ Machine/HPC Machine
# ======================================
# Log into dmdaq node and create local DM deployment directory
# in dm user home area
cd /opt/DM
ls -l
# Checkout code from the development trunk
svn co https://subversion.xray.aps.anl.gov/DataManagement/trunk dev
# Build support area
# Note the following:
# - since demo machines are identical, we could simply copy support/dm code
# from the storage node; this is not necessarily the case in general
# - support area and DM code distribution can be shared between DAQ and DS
# nodes
# - support area on the daq node is much lighter (i.e., no need
# for glassfish, etc.)
cd dev
make support-daq
# Source setup
source setup.sh
# Deploy DAQ Web Service
# Note:
# - requires storage node to be installed
# - generates SSL certificates and configuration files
# - after this step, DAQ web service is accessible at port 33336
# - log files are under DM/var/log
# - configuration files are under DM/etc
# - user setup file is DM/etc/dm.setup.sh
make deploy-daq-web-service
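# Quick smoke test from the daq node (a sketch; assumes the user setup file
# exists as described above):
source /opt/DM/etc/dm.setup.sh
dm-get-experiments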
# Need 1 terminal on DAQ node, 1 terminal on HPC node and 2 terminals on DS
# node
#####################################################################
# Prepare ahead of time
# ssh sveseli@dmstorage: add experiment e1, mm2
source /opt/DM/etc/dm.setup.sh
dm-add-experiment --name e1 --type-id 1
dm-add-experiment --name mm2 --type-id 1
# ssh dm@dmstorage: add a few users
source /opt/DM/etc/dm.setup.sh
dm-add-user --username dmuser1 --first-name Test --last-name User1
dm-add-user --username dmuser2 --first-name Test --last-name User2
dm-add-user --username dmuser3 --first-name Test --last-name User3
#####################################################################
# Initialize demo
# ssh -X dm@dmstorage: start services
cd /opt/DM/dev
source setup.sh
./etc/init.d/dm-postgresql start
./etc/init.d/dm-glassfish start
./etc/init.d/dm-mongodb start
./etc/init.d/dm-ds-web-service start
./etc/init.d/dm-cat-web-service start
# ssh -X dm@dmdaq: start services
cd /opt/DM/dev
source setup.sh
./etc/init.d/dm-daq-web-service start
# ssh -X dm@dmhpc: start services, check NFS, check SGE
cd /opt/DM/dev
source setup.sh
./etc/init.d/dm-daq-web-service start
ls -l /net/dmstorage/opt/DM
source /opt/sge/default/common/settings.sh
qstat -f
#
# Check portal: https://dmstorage.svdev.net:8181/dm
#
#####################################################################
#
# Log into portal as dm admin: https://dmstorage.svdev.net:8181/dm
#
# Show users
# Show experiments
# Add dmuser1 to experiment e1
#
# SCENARIO 2: UPLOAD + METADATA CATALOG
#
# dm@dmstorage: check directory content on the storage node, should be empty
ls -l /opt/DM/data
# dm@dmstorage: check dmuser1 user, note list of groups
id dmuser1
# ssh sveseli@dmstorage: start experiment e1
source /opt/DM/etc/dm.setup.sh
dm-start-experiment --name e1
# dm@dmstorage: check directory content on the storage node
# note that experiment directory permissions are restricted
ls -l /opt/DM/data/ESAF
ls -l /opt/DM/data/ESAF/e1/
# dm@dmstorage: check dmuser1 user, note user belongs to new experiment group
id dmuser1
# sveseli@dmstorage: show there are no experiment files in cataloging service
dm-get-experiment-files --experiment e1
# ssh dm@dmdaq: source setup file, show test data
source /opt/DM/etc/dm.setup.sh
ls -lR /opt/DM/experiments/e1
cat /opt/DM/experiments/e1/file1
# dm@dmdaq: upload data for experiment e1, specify a few arbitrary keys
dm-upload --experiment e1 --data-directory /opt/DM/experiments/e1 ownerUser:JohnC ownerGroup:APSU memo1:ApprovedByNDA memo2:DislikedByGD
# dm@dmstorage: check experiment storage directory content
# note permissions, ownership
ls -l /opt/DM/data/ESAF/e1/
ls -l /opt/DM/data/ESAF/e1/2015/07/09/
# sveseli@dmstorage: get metadata for experiment files from cataloging service
dm-get-experiment-files --experiment e1
dm-get-experiment-file --experiment e1 --file file2 --display-keys=__all__
# sveseli@dmstorage: show metadata updates
dm-update-experiment-file --experiment e1 --file file3 quality:A --display-keys=id,fileName,quality
# sveseli@dmstorage: show metadata search
dm-get-experiment-files --experiment e1 quality:A
dm-get-experiment-files --experiment e1 storageFilePath:2015
#
# SCENARIO 6: DAQ + METADATA CATALOG + SDDS PARAMETERS + HPC PROCESSING
#
# sveseli@dmstorage: add and start experiment mm2
dm-start-experiment --name mm2
# sveseli@dmstorage: get mm2 files, note no files
dm-get-experiment-files --experiment mm2
# dm@dmstorage: show no files in experiment directory
ls -l /opt/DM/data/ESAF
ls -l /opt/DM/data/ESAF/mm2
# dm@dmstorage: show processing script
more /opt/DM/processing/sge_sdds_analysis.sh
# dm@dmstorage: tail log file to observe processing
tail -f /opt/DM/var/log/dm.ds-web-service.log
# dm@dmhpc: show qstat
watch -d 'qstat -f'
# dm@dmdaq: start DAQ for experiment mm2, request SDDS parameter
# processing, specify SGE processing script
rm -rf /tmp/data/mm2
mkdir -p /tmp/data/mm2
dm-start-daq --experiment mm2 --data-directory /tmp/data/mm2 processSddsParameters:True sgeJobScript:/opt/DM/processing/sge_sdds_analysis.sh
# dm@dmdaq: copy experiment mm2 files into observed directory, watch qstat
ls -l /opt/DM/experiments/mm2/
cp /opt/DM/experiments/mm2/* /tmp/data/mm2/ && sleep 5 && touch /tmp/data/mm2/* &
tail -f /opt/DM/var/log/dm.daq-web-service.log
# sveseli@dmstorage: get mm2 files, note original + processed files
dm-get-experiment-files --experiment mm2
# dm@dmstorage: show png files in experiment directory
ls -l /opt/DM/data/ESAF/mm2/*.png
# sveseli@dmstorage: get one mm2 .edf file, note SDDS parameters in metadata
dm-get-experiment-file --experiment mm2 --file `dm-get-experiment-files --experiment mm2 --display-keys=fileName | grep -v png | head -1 | cut -f2 -d '='` --display-keys=__all__ --display-format=dict
# sveseli@dmstorage: get mm2 .png files, note parentFile key
dm-get-experiment-files --experiment mm2 fileName:.png --display-keys=fileName,parentFile --display-format=dict
# dm@dmstorage: open one processed file
xdg-open `ls -c1 /opt/DM/data/ESAF/mm2/*.png | head -1`
# dm@dmdaq: stop DAQ for experiment mm2
dm-stop-daq --experiment mm2
# Demo environment consists of two linux VMs:
# - data acquisition (DAQ) and data storage (DS) nodes
# - CentOS 6.6, 64-bit
# - no shared storage
# - DS node runs database server, Web Portal and DS Web Service
# - DAQ node runs DAQ Web Service
# Machine Preparation
# ===================
# install dependencies (both machines)
yum install -y gcc libgcc expect zlib-devel openssl-devel openldap-devel subversion make sed gawk autoconf automake wget readline-devel
# create system (dm) account on both machines, configure ssh-keys and
# authorized_keys files
# configure /opt/DM area for software installation
mkdir -p /opt/DM
chown -R dm.dm /opt/DM
chmod 755 /opt/DM
# configure (or disable) firewall (both machines)
/etc/init.d/iptables stop
# DM Deployment: DS Machine
# =========================
# Log into dmstorage node and create local DM deployment directory
# in dm user home area
cd /opt/DM
ls -l
# Checkout code as release 0.1
svn co https://subversion.xray.aps.anl.gov/DataManagement/tags/20150421 dm-0.1
# Build support area
cd dm-0.1
make support
# Source setup
source setup.sh
# Create db
make db
# Configure Web Portal
# Note:
# - this needs to be done only during the first portal deployment,
# or after portal has been unconfigured explicitly
# - this step configures DB access
make configure-web-portal
# Deploy Web Portal
# Note:
# - deploys portal war file into glassfish
# - after this step, users can access portal at
# https://dmstorage.svdev.net:8181/dm
make deploy-web-portal
# Deploy DS Web Service
# Note:
# - generates SSL certificates and configuration files
# - after this step, DS web service is accessible at port 22236
# - log files are under DM/var/log
# - configuration files are under DM/etc
# - user setup file is DM/etc/dm.setup.sh
# - service control script is under DM/dm-0.1/etc/init.d
make deploy-ds-web-service
# Check functionality. Open second terminal and log into dmstorage node
# as user sveseli
# Source setup file to get access to DM commands
source /opt/DM/etc/dm.setup.sh
# Attempt to get list of users as user sveseli, should result
# in authorization error
# Note:
# - every command comes with common set of options
dm-get-users -h
dm-get-users --version
dm-get-users
echo $?
# Repeat command, this time use the administrator (dm) account
dm-get-users
# Repeat command, note that session with DS service has been established, so no
# more password prompts until session expires
cat ~/.dm/.ds.session.cache
dm-get-users
# DM Deployment: DAQ Machine
# ==========================
# Log into dmdaq node and create local DM deployment directory
# in dm user home area
cd /opt/DM
ls -l
# Checkout code as release 0.1
svn co https://subversion.xray.aps.anl.gov/DataManagement/tags/20150421 dm-0.1
# Build support area
# Note the following:
# - since demo machines are identical, we could simply copy support/dm code
# from the storage node; this is not necessarily the case in general
# - support area and DM code distribution can be shared between DAQ and DS
# nodes
# - support area on the daq node is much lighter (i.e., no need
# for glassfish, etc.)
cd dm-0.1
make support-daq
# Source setup
source setup.sh
# Deploy DAQ Web Service
# Note:
# - requires storage node to be installed
# - generates SSL certificates and configuration files
# - after this step, DAQ web service is accessible at port 33336
# - log files are under DM/var/log
# - configuration files are under DM/etc
# - user setup file is DM/etc/dm.setup.sh
make deploy-daq-web-service
# DM Functionality: DAQ
# =====================
# add new experiment (sveseli@dmstorage)
dm-add-experiment -h
dm-add-experiment --name exp1 --type-id 1 --description test
dm-get-experiments
dm-get-experiment --name exp1
dm-get-experiment --name exp1 --display-keys=__all__
# check directory content on the storage node (dm@dmstorage)
ls -l /opt/DM/data
# start experiment (sveseli@dmstorage)
dm-start-experiment --name exp1
# check directory content on the storage node (dm@dmstorage)
ls -l /opt/DM/data
ls -l /opt/DM/data/ESAF
ls -l /opt/DM/data/ESAF/exp1/
# at this point we can log into the portal to see experiment that was created
# observe that start time is entered correctly
# in the first terminal on the daq node, tail log file (dm@dmdaq)
tail -f /opt/DM/var/log/dm.daq-web-service.log
# open second terminal for daq node, login as system (dm) user
# source setup file (dm@dmdaq)
cat /opt/DM/etc/dm.setup.sh
source /opt/DM/etc/dm.setup.sh
# prepare DAQ directory for this experiment (dm@dmdaq)
mkdir -p /tmp/data/exp1
# start DAQ (dm@dmdaq)
dm-start-daq -h
dm-start-daq --experiment exp1 --data-directory /tmp/data/exp1
# create test file in the DAQ directory (daq node)
# observe log file entries, point out file transfer
touch /tmp/data/exp1/file1
echo "Hello there, data management is here" > /tmp/data/exp1/file1
# check directory content on the storage node (dm@dmstorage)
# file1 should be transferred
ls -l /opt/DM/data/ESAF/exp1/
# stop DAQ (dm@dmdaq)
dm-stop-daq -h
dm-stop-daq --experiment exp1
# DM Functionality: Upload
# ========================
# prepare data directory we want to upload (dm@dmdaq)
mkdir -p /tmp/data/exp1/2015/04/21
echo "this is file 2" > /tmp/data/exp1/2015/04/21/file2
echo "this is file 3" > /tmp/data/exp1/2015/04/21/file3
# check directory content on the storage node (dm@dmstorage)
ls -l /opt/DM/data/ESAF/exp1/
# upload data (dm@dmdaq)
dm-upload -h
dm-upload --experiment exp1 --data-directory /tmp/data/exp1
# check directory content on the storage node (dm@dmstorage)
ls -l /opt/DM/data/ESAF/exp1/
ls -l /opt/DM/data/ESAF/exp1/2015/04/21/
cat /opt/DM/data/ESAF/exp1/2015/04/21/file3
# stop experiment (sveseli@dmstorage)
dm-stop-experiment --name exp1
# at this point we can log into the portal to see modified experiment
# observe that end time is entered correctly
# Demo environment consists of two linux VMs:
# - data acquisition (DAQ) and data storage (DS) nodes
# - CentOS 6.6, 64-bit
# - no shared storage
# - DS node runs database server, Web Portal and DS Web Service
# - DAQ node runs DAQ Web Service
# Machine Preparation
# ===================
# install dependencies (both machines)
yum install -y gcc libgcc expect zlib-devel openssl-devel openldap-devel subversion make sed gawk autoconf automake wget readline-devel
# Download globus RPM repo and install gridftp (both machines)
# http://toolkit.globus.org/ftppub/gt6/installers/repo/globus-toolkit-repo-latest.noarch.rpm
yum install globus-gridftp
# Disable requiretty in /etc/sudoers
# Prepare gridftp server to use sshd (dmstorage machine)
globus-gridftp-server-enable-sshftp
# create system (dm) account on both machines, configure ssh-keys and
# authorized_keys files
# create several user accounts (dmstorage machine): dmuser1, dmuser2, dmuser3
# build and install epics base and SDDS/SDDSepics extensions under
# /opt/epics (dmstorage machine)
# configure /opt/DM area for software installation
mkdir -p /opt/DM
chown -R dm.dm /opt/DM
chmod 755 /opt/DM
# configure (or disable) firewall (both machines)
/etc/init.d/iptables stop
# DM Deployment: DS Machine
# =========================
# Log into dmstorage node and create local DM deployment directory
# in dm user home area
cd /opt/DM
ls -l
# Checkout code as release 0.2
svn co https://subversion.xray.aps.anl.gov/DataManagement/tags/20150630 dm-0.2
# Build support area
cd dm-0.2
make support
# Source setup
source setup.sh
# Create db
make db
# Configure Web Portal
# Note:
# - this needs to be done only during the first portal deployment,
# or after portal has been unconfigured explicitly
# - this step configures DB access
# - adds initial DM system user to the DB
make configure-web-portal
# The above step used two new utilities that go directly to the db:
dm-add-user -h
dm-add-user-system-role -h
# Add a few users
dm-add-user --username dmuser1 --first-name Test --last-name User1
dm-add-user --username dmuser2 --first-name Test --last-name User2
dm-add-user --username dmuser3 --first-name Test --last-name User3
# Deploy Web Portal
# Note:
# - deploys portal war file into glassfish
# - after this step, users can access portal at
# https://dmstorage.svdev.net:8181/dm
make deploy-web-portal
# Show no sudo functionality for DM account
sudo -l
# Deploy DS Web Service
# Note:
# - generates SSL certificates and configuration files
# - after this step, DS web service is accessible at port 22236
# - log files are under DM/var/log
# - configuration files are under DM/etc
# - user setup file is DM/etc/dm.setup.sh
# - service control script is under DM/dm-0.2/etc/init.d
make deploy-ds-web-service
# Show sudo functionality for DM account that enables group/permission
# management
sudo -l
# Check functionality. Open second terminal and log into dmstorage node
# as user sveseli
# Source setup file to get access to DM commands
source /opt/DM/etc/dm.setup.sh
# Get user list as administrator (dm) account
dm-get-users
# DM Deployment: DAQ Machine
# ==========================
# Log into dmdaq node and create local DM deployment directory
# in dm user home area
cd /opt/DM
ls -l
# Checkout code as release 0.2
svn co https://subversion.xray.aps.anl.gov/DataManagement/tags/20150630 dm-0.2
# Build support area
# Note the following:
# - since demo machines are identical, we could simply copy support/dm code
# from the storage node; this is not necessarily the case in general
# - support area and DM code distribution can be shared between DAQ and DS
# nodes
# - support area on the daq node is much lighter (i.e., no need
# for glassfish, etc.)
cd dm-0.2
make support-daq
# Source setup
source setup.sh
# Deploy DAQ Web Service
# Note:
# - requires storage node to be installed
# - generates SSL certificates and configuration files
# - after this step, DAQ web service is accessible at port 33336
# - log files are under DM/var/log
# - configuration files are under DM/etc
# - user setup file is DM/etc/dm.setup.sh
make deploy-daq-web-service
# DM Functionality: DAQ
# =====================
# add new experiment and a couple of users (sveseli@dmstorage)
dm-add-experiment --name exp1 --type-id 1 --description test
dm-add-user-experiment-role --username dmuser1 --experiment exp1 --role=User
dm-add-user-experiment-role --username dmuser2 --experiment exp1 --role=User
# Note that dmuser1 and 2 are on the list of experiment users
dm-get-experiments
dm-get-experiment --name exp1 --display-keys=__all__
# check directory content on the storage node (dm@dmstorage)
ls -l /opt/DM/data
# Show that unix account corresponding to dmuser1 has no special groups
# associated with it
id dmuser1
# Show there is no exp1 unix group
grep exp1 /etc/group
# start experiment (sveseli@dmstorage)
dm-start-experiment --name exp1
# Show there is now exp1 unix group
grep exp1 /etc/group
# check directory content on the storage node (dm@dmstorage)
# note that experiment directory permissions are restricted
ls -l /opt/DM/data/ESAF
ls -l /opt/DM/data/ESAF/exp1/
# Check experiment user groups: only 1 and 2 should have new group assigned
# to them
id dmuser1
id dmuser2
id dmuser3
# in the first terminal on the storage node, tail log file (dm@dmstorage)
tail -f /opt/DM/var/log/dm.ds-web-service.log
# in the first terminal on the daq node, tail log file (dm@dmdaq)
tail -f /opt/DM/var/log/dm.daq-web-service.log
# open second terminal for daq node, login as system (dm) user
# source setup file (dm@dmdaq)
source /opt/DM/etc/dm.setup.sh
# prepare DAQ directory for this experiment (dm@dmdaq)
mkdir -p /tmp/data/exp1
# create test file in the DAQ directory (daq node)
# observe log file entries, point out file transfer
echo "Hello there, data management is here" > /tmp/data/exp1/file1
# check directory content on the storage node (dm@dmstorage)
# file1 should be transferred
ls -l /opt/DM/data/ESAF/exp1/
# upload data (dm@dmdaq)
dm-upload --experiment exp1 --data-directory /tmp/data/exp1
# check directory content on the storage node (dm@dmstorage)
# file1 should be transferred
# note permissions
ls -l /opt/DM/data/ESAF/exp1/
# as root@dmstorage, su into dmuser1 account and try to read data
# should work
cat /opt/DM/data/ESAF/exp1/file1
# as root@dmstorage, su into dmuser3 account and try to read data
# should fail
cat /opt/DM/data/ESAF/exp1/file1
# Demonstrate retries: show config file
vi /opt/DM/etc/dm.daq-web-service.conf
# As root@dmdaq, temporarily move rsync
mv /usr/bin/rsync /usr/bin/rsync.orig
# upload new data (dm@dmdaq), observe how transfer fails
echo "Hello there, data management is here again" > /tmp/data/exp1/file1
dm-upload --experiment exp1 --data-directory /tmp/data/exp1
# As root@dmdaq, restore rsync, observe how transfer succeeds
mv /usr/bin/rsync.orig /usr/bin/rsync
# check directory content on the storage node (dm@dmstorage)
# file1 should be transferred
ls -l /opt/DM/data/ESAF/exp1/
# Demonstrate gridftp plugin
# Edit config file as dm@dmdaq, comment out rsync plugin, uncomment gridftp
# plugin; restart service
vi /opt/DM/etc/dm.daq-web-service.conf
./etc/init.d/dm-daq-web-service restart
tail -f /opt/DM/var/log/dm.daq-web-service.log
# upload new data (dm@dmdaq), observe how transfer succeeds
echo "Hello there, data management is here yet again" > /tmp/data/exp1/file1
dm-upload --experiment exp1 --data-directory /tmp/data/exp1
# stop experiment (sveseli@dmstorage)
dm-stop-experiment --name exp1
cd /local/DmSystem/dm
source setup.sh
make deploy-cat-web-service
cd /local/DmSystem/support
./bin/install_node.sh
cd /local/DmSystem/support/node/linux-x86_64/bin/node_modules/mongo-express/
cp config.default.js config.js
vi config.js
export PATH=/local/DmSystem/support/node/linux-x86_64/bin:$PATH
../forever/bin/forever start app.js
../forever/bin/forever list
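# mongo-express should now be reachable in a browser (its default port is
# 8081 unless overridden in config.js)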
RHEL7 Packages
===============
make
autoconf
expect
gcc
g++
subversion
zlib-devel
openssl-devel
libffi-devel
openldap-devel
readline-devel
ncurses-devel
qt-x11
qt-postgresql
qt-devel
gtk2-devel
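# One-shot install sketch for the list above (assumes stock RHEL7
# repositories; note that "g++" is provided by the gcc-c++ package):
yum install -y make autoconf expect gcc gcc-c++ subversion zlib-devel \
    openssl-devel libffi-devel openldap-devel readline-devel ncurses-devel \
    qt-x11 qt-postgresql qt-devel gtk2-devel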
Globus Packages
===============
globus-openssl-module
globus-ftp-client
globus-gsi-proxy
globus-gsi-openssl
globus-ftp-control
globus-gass-copy
globus-common-16.8
globus-gass-transfer
globus-io-11.8
globus-gss-assist
globus-gsi-cert
globus-xio-popen
globus-xio-gsi
globus-gsi-credential
globus-callout-3.15
globus-gsi-sysconfig
globus-gsi-callback
globus-xio-5.14
globus-gssapi-gsi
globus-gssapi-error
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SPHINXPROJ = APSDataManagement
SOURCEDIR = source
BUILDDIR = build
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
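# Typical usage sketch (run from the sphinx directory; assumes sphinx-build
# is on the PATH):
#   make html
#   xdg-open build/html/index.html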
# -*- coding: utf-8 -*-
#
# APS Data Management documentation build configuration file, created by
# sphinx-quickstart on Thu Feb 23 09:20:39 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.autodoc',
              'sphinx.ext.coverage',
              'sphinx.ext.ifconfig',
              'sphinx.ext.viewcode']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'APS Data Management'
copyright = u'2017, APS/SDM'
author = u'APS/SDM'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = u'1.0'
# The full version, including alpha/beta/rc tags.
release = u'1.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'APSDataManagementdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',

    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'APSDataManagement.tex', u'APS Data Management Documentation',
     u'APS/SDM', 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'apsdatamanagement', u'APS Data Management Documentation',
     [author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'APSDataManagement', u'APS Data Management Documentation',
     author, 'APSDataManagement', 'One line description of project.',
     'Miscellaneous'),
]
.. automodule:: dm.aps_bss.api
.. currentmodule:: dm.aps_bss.api

ApsBssApi
---------

.. autoclass:: dm.aps_bss.api.apsBssApi.ApsBssApi()
   :show-inheritance:
   :members: __init__, listRuns, getCurrentRun, listBeamlineProposals, getBeamlineProposal
.. automodule:: dm.daq_web_service.api
.. currentmodule:: dm.daq_web_service.api

ExperimentDaqApi
----------------

.. autoclass:: dm.daq_web_service.api.experimentDaqApi.ExperimentDaqApi()
   :show-inheritance:
   :members: __init__, startDaq, stopDaq, listDaqs, getDaqInfo, upload, stopUpload, listUploads, getUploadInfo, listProcessingPlugins
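
A minimal usage sketch (hypothetical: the no-argument constructor and the
keyword arguments below are assumptions based on the CLI options, not
confirmed by this documentation)::

    from dm.daq_web_service.api.experimentDaqApi import ExperimentDaqApi

    # Assumes connection settings come from the environment configured by
    # /opt/DM/etc/dm.setup.sh, as for the dm-* command line tools.
    api = ExperimentDaqApi()

    # Hypothetical call mirroring
    # "dm-upload --experiment e1 --data-directory /opt/DM/experiments/e1".
    upload = api.upload(experimentName='e1',
                        dataDirectory='/opt/DM/experiments/e1')

    # Related inspection calls documented above: listUploads(),
    # getUploadInfo(), listDaqs(), getDaqInfo(), listProcessingPlugins().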
.. APS Data Management documentation master file, created by
   sphinx-quickstart on Thu Feb 23 09:20:39 2017.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to APS Data Management's documentation!
===============================================

The `dm` package contains python APIs for accessing Data Management services.

.. toctree::
   :maxdepth: 4
   :caption: Contents:

   dm.daq_web_service.api
   dm.aps_bss.api

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`