#!/bin/sh
# Run command: wrapper for updateExperimentFileCli.py (CAT web service).
# Source the DM setup file if the environment is not configured yet, then
# delegate to the Python CLI with the re-quoted command line arguments.
if [ -z "$DM_ROOT_DIR" ]; then
    cd `dirname $0` && myDir=`pwd`
    setupFile=$myDir/../setup.sh
    if [ ! -f "$setupFile" ]; then
        echo "Cannot find setup file: $setupFile"
        exit 1
    fi
    source "$setupFile" > /dev/null
fi
source dm_command_setup.sh
eval "$DM_ROOT_DIR/src/python/dm/cat_web_service/cli/updateExperimentFileCli.py $DM_COMMAND_ARGS"
#!/bin/sh
# Run command: wrapper for updateUserFromApsDbCli.py (APS user DB).
if [ -z "$DM_ROOT_DIR" ]; then
    cd `dirname $0` && myDir=`pwd`
    setupFile=$myDir/../setup.sh
    if [ ! -f "$setupFile" ]; then
        echo "Cannot find setup file: $setupFile"
        exit 1
    fi
    source "$setupFile" > /dev/null
fi
source dm_command_setup.sh
eval "$DM_ROOT_DIR/src/python/dm/aps_user_db/cli/updateUserFromApsDbCli.py $DM_COMMAND_ARGS"
#!/bin/sh
# Run command: wrapper for updateUsersFromApsDbCli.py (APS user DB).
if [ -z "$DM_ROOT_DIR" ]; then
    cd `dirname $0` && myDir=`pwd`
    setupFile=$myDir/../setup.sh
    if [ ! -f "$setupFile" ]; then
        echo "Cannot find setup file: $setupFile"
        exit 1
    fi
    source "$setupFile" > /dev/null
fi
source dm_command_setup.sh
eval "$DM_ROOT_DIR/src/python/dm/aps_user_db/cli/updateUsersFromApsDbCli.py $DM_COMMAND_ARGS"
#!/bin/sh
# Run command: wrapper for uploadCli.py (DAQ web service).
if [ -z "$DM_ROOT_DIR" ]; then
    cd `dirname $0` && myDir=`pwd`
    setupFile=$myDir/../setup.sh
    if [ ! -f "$setupFile" ]; then
        echo "Cannot find setup file: $setupFile"
        exit 1
    fi
    source "$setupFile" > /dev/null
fi
source dm_command_setup.sh
eval "$DM_ROOT_DIR/src/python/dm/daq_web_service/cli/uploadCli.py $DM_COMMAND_ARGS"
#!/bin/sh
# Helper functions for DM commands.
# Rebuild the command line arguments so that option values and positional
# arguments containing spaces stay quoted when the wrapper scripts pass
# $DM_COMMAND_ARGS to eval.
DM_COMMAND_ARGS=""
while [ $# -ne 0 ]; do
    arg=$1
    if [[ $arg == -* ]]; then
        # Option: split into key and optional value at the first '='
        key=`echo "$arg" | cut -f1 -d'='`
        keyHasValue=`echo "$arg" | grep '='`
        if [ ! -z "$keyHasValue" ]; then
            value=`echo "$arg" | cut -f2- -d'='`
            DM_COMMAND_ARGS="$DM_COMMAND_ARGS $key=\"$value\""
        else
            DM_COMMAND_ARGS="$DM_COMMAND_ARGS $key"
        fi
    else
        # Positional argument: re-quote as a single word
        DM_COMMAND_ARGS="$DM_COMMAND_ARGS \"$arg\""
    fi
    shift
done
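For illustration, a worked sketch of the re-quoting this helper performs; the dm-upload wrapper name and the option shown are placeholders for illustration, not taken from this diff:

# Invoked as:
#   dm-upload --some-option="value with spaces" "/path/with spaces/data"
# The caller's shell strips the quotes, so the loop above receives two
# arguments:
#   --some-option=value with spaces     and     /path/with spaces/data
# and rebuilds them as:
#   DM_COMMAND_ARGS=' --some-option="value with spaces" "/path/with spaces/data"'
# so the wrapper's eval passes each value to the Python CLI as a single
# argument despite the embedded spaces.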
TOP = ..
SUBDIRS = sphinx
include $(TOP)/tools/make/RULES_DM
Release 1.1 (03/10/2017)
=============================
- Introduced integration with Beamline Scheduling System:
  - New commands:
    - list-runs
    - list-proposals
    - get-proposal
  - Modified add-experiment command to automatically add users associated
    with a given beamline proposal
- Added the following functionality for managing DAQs:
  - Max. run time specification causes DAQ to be stopped automatically
  - Destination directory specification causes files to be uploaded into
    a specific directory relative to experiment root path
  - Upload data/target directory on exit specification causes the given
    directory to be uploaded automatically after the DAQ completes
- Added the following functionality for managing uploads:
  - Destination directory specification causes files to be uploaded into
    a specific directory relative to experiment root path
- Introduced framework for higher level beamline specific tools:
  - New commands that combine adding an experiment and running DAQs or
    uploads
- Introduced Sphinx as the Python API documentation framework
- Resolved possible timeout issue when starting DAQ or directory upload
  with a directory containing a large number of files
- Simplified data directory command line option for beamlines that use
  gridftp (via DM_DATA_DIRECTORY_MAP environment variable)
Release 1.0 (01/31/2017)
=============================
- Introduced concept of experiment station and redesigned authorization
mechanisms to allow beamline managers to manage their stations; all
APIs and CLIs now conform to the new authorization scheme
- Modified get-experiments utility to allow retrieving list of experiments
for a given station
- Cleaned up web portal by removing unused views, and enabled station
management functionality
- GPFS DDN (extrepid) has replaced xstor as the main APS storage
- CLI changes:
  - add-experiment command requires station name (can be set from env.
    variable); experiment type can be specified using type name
  - get-experiments command requires station name for beamline managers (can
    be set from env. variable)
  - start-experiment command is now optional
Release 0.15 (11/01/2016)
=============================
- Resolved issue with incorrect accounting of processing errors for DAQs
- Improved DAQ processing algorithm to avoid resource starvation between
simultaneous DAQs and uploads
- Enhanced monitoring status information for both DAQs and uploads
Release 0.14 (10/14/2016)
=============================
- Introduced new framework and utilities for synchronizing users with
APS DB
- Resolved several issues with special characters in file names for
gridftp transfer plugin
Release 0.13 (05/27/2016)
=============================
- Added SFTP file system observer agent
- Enhanced MongoDB plugin with file md5 sum calculation
Release 0.12 (05/06/2016)
=============================
- Developed processing for HDF5 metadata in Mongo cataloging plugin
- Modified catalog API and service interfaces to use file collections on
a per-experiment basis
Release 0.11 (04/29/2016)
=============================
- Resolved issue with upload command for directories containing large
number of files
- Implemented enhanced upload processing algorithm to avoid resource
starvation between simultaneous DAQs and uploads
- Added new polling file system observer agent as option for monitoring
directories
- Reworked catalog API and corresponding MongoDB interfaces to use unique
experiment file paths, rather than file names
Release 0.10 (03/11/2016)
=============================
- Added dm-list-daqs and dm-list-uploads commands
- Resolved issue with newly created directories treated as files for
real-time data acquisitions
Release 0.9 (02/25/2016)
=============================
- Developed directory processing mode for uploads; in this mode file transfer
plugins transfer entire directories as opposed to individual files
- Added dm-get-processing-plugins command
- Resolved working directory issue that may occur with simultaneous uploads
Release 0.8 (01/26/2016)
=============================
- Enhanced upload/daq performance and functionality (hidden files are not
processed; for uploads system can detect files that had been processed
already; improved handling and reporting of processing errors)
- Source file checksum is calculated for rsync/gridftp plugins by default
- Added dm-stop-upload command
- Resolved globus online user authorization delay issue
Release 0.7 (12/08/2015)
=============================
- Introduced framework and user interfaces for tracking progress of file
uploads and data acquisitions in DAQ service
- Added ability to monitor multiple directories for the same experiment
simultaneously (required changes to DAQ service REST interfaces)
- Enhanced start/stop DAQ and upload commands to use DM_FILE_SERVER_URL
environment variable
- Added user interfaces and utilities that enable experiment data download
from machines that have SSH access to the storage host
Release 0.6 (11/6/2015)
=============================
- Added file system observer agent interface for DAQ service
- Implemented FTP file system observer for DAQ service
- Added interfaces for deleting user experiment role in DS service
- Introduced java REST API framework, and specific experiment DS service API
- Web Portal notifies DS service about experiment user modifications
Release 0.5 (10/08/2015)
=============================
- Implemented Single Sign-On solution for backend services
- Enabled user authentication via login file
- Added file stat (with checksum) interface in DS web service
- After adding user role to experiment via command line, user is also
added to experiment group (if one exists)
- Added rsync file transfer plugin with checksum and delete
Release 0.4 (09/21/2015)
=============================
- Number of minor modifications made in preparation for test deployment at
beamlines
Release 0.3 (07/22/2015)
=============================
- Developed initial version of Cataloging Web Service based on MongoDB
- Developed sample processing plugins: file metadata catalog, SDDS processing,
SGE job submission
Release 0.2 (06/30/2015)
=============================
- Implemented storage permission management and user group management
- Developed common file processing service plugin framework
Release 0.1 (04/21/2015)
=============================
- Functional web portal (user, experiment, and policy pages)
- Developed web service and its API/CLI frameworks
- Developed initial version of Data Storage Web Service
- Developed initial version of Data Acquisition Web Service;
- DAQ service can monitor file system on a detector node and subsequently
transfer data to storage
"1-BM-B,C"
"1-ID-B,C,E"
"2-BM-A,B"
"2-ID-D"
"2-ID-E"
"3-ID-B,C,D"
"4-ID-C"
"4-ID-D"
"5-BM-C"
"5-BM-D"
"5-ID-B,C,D"
"6-BM-A,B"
"6-ID-B,C"
"6-ID-D"
"7-BM-B"
"7-ID-B,C,D"
"8-BM-B"
"8-ID-E"
"8-ID-I"
"9-BM-B,C"
"9-ID-B,C"
"10-BM-A,B"
"10-ID-B"
"11-BM-B"
"11-ID-B"
"11-ID-C"
"11-ID-D"
"12-BM-B"
"12-ID-B"
"12-ID-C,D"
"13-BM-C"
"13-BM-D"
"13-ID-C,D"
"13-ID-E"
"14-BM-C"
"14-ID-B"
"15-ID-B,C,D"
"16-BM-B"
"16-BM-D"
"16-ID-B"
"16-ID-D"
"17-BM-B"
"17-ID-B"
"18-ID-D"
"19-BM-D"
"19-ID-D"
"20-BM-B"
"20-ID-B,C"
"21-ID-D"
"21-ID-E"
"21-ID-F"
"21-ID-G"
"22-BM-D"
"22-ID-D"
"23-BM-B"
"23-ID-B"
"23-ID-D"
"24-ID-C"
"24-ID-E"
"26-ID-C"
"27-ID-B"
"29-ID-C,D"
"30-ID-B,C"
"31-ID-D"
"32-ID-B,C"
"33-BM-C"
"33-ID-D,E"
"34-ID-C"
"34-ID-E"
"35-ID-B,C,D,E"
# Demo environment consists of three linux VMs:
# - data acquisition (DAQ), data storage (DS), sge cluster (HPC) nodes
# - CentOS 6.6, 64-bit
# - no shared storage
# - DS node runs PostgreSQL database server, Web Portal, DS Web Service,
# CAT Web Service, MongoDB server
# - DAQ node runs DAQ Web Service
# - HPC node runs SGE cluster
# Machine Preparation
# ===================
# install dependencies (all machines)
yum install -y gcc libgcc expect zlib-devel openssl-devel openldap-devel subversion make sed gawk autoconf automake wget readline-devel
# Download globus RPM repo and install gridftp (both machines)
# http://toolkit.globus.org/ftppub/gt6/installers/repo/globus-toolkit-repo-latest.noarch.rpm
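# A sketch of fetching and installing that repo RPM (the exact wget/rpm
# commands below are an assumption, not part of the original notes):
wget http://toolkit.globus.org/ftppub/gt6/installers/repo/globus-toolkit-repo-latest.noarch.rpm
rpm -Uvh globus-toolkit-repo-latest.noarch.rpm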
yum install globus-gridftp
# Disable requiretty in /etc/sudoers
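# For example (an assumption, not part of the original notes), use visudo and
# comment out the default line:
#   change "Defaults    requiretty"  to  "# Defaults    requiretty"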
# Prepare gridftp server to use sshd (dmstorage machine)
globus-gridftp-server-enable-sshftp
# create system (dm) account on both machines, configure ssh-keys and
# authorized_keys files
# create several user accounts (dmstorage machine): dmuser1, dmuser2, dmuser3
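# A possible sketch of the account setup above (usernames come from these
# notes; the ssh-keygen options are assumptions):
useradd dm                                              # both machines
for u in dmuser1 dmuser2 dmuser3; do useradd $u; done   # dmstorage only
su - dm -c "mkdir -p ~/.ssh && ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa"
# then copy dm's public key into ~dm/.ssh/authorized_keys on the other machine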
# build and install epics base and SDDS/SDDSepics extensions under
# /opt/epics (dmstorage machine)
# build SDDS python under /opt/epics/extensions/src/SDDS/python/
# copy sdds.py into /opt/DM/support/python/linux-x86_64/lib/python2.7/
# copy /opt/epics/extensions/src/SDDS/python/O.linux-x86_64/sddsdatamodule.so
# into /opt/DM/support/python/linux-x86_64/lib/python2.7/lib-dynload/
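# The two copy steps above as concrete commands (destination paths from the
# notes; the sdds.py source location is an assumption based on the build dir):
cp /opt/epics/extensions/src/SDDS/python/sdds.py \
   /opt/DM/support/python/linux-x86_64/lib/python2.7/
cp /opt/epics/extensions/src/SDDS/python/O.linux-x86_64/sddsdatamodule.so \
   /opt/DM/support/python/linux-x86_64/lib/python2.7/lib-dynload/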
# export /opt/DM to dmhpc node
# yum install nfs-utils
# edit /etc/exports and add /opt/DM 192.168.100.8(rw,sync)
# exportfs -a
# restart nfs
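# The NFS export steps above as concrete commands (a sketch; package and
# service names assume CentOS 6):
yum install -y nfs-utils
echo "/opt/DM 192.168.100.8(rw,sync)" >> /etc/exports
exportfs -a
service nfs restart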
# install sge on hpc machine, add dmstorage as submission node,
# copy /opt/sge to dmstorage
# configure /opt/DM area for software installation
mkdir -p /opt/DM
chown -R dm.dm /opt/DM
chmod 755 /opt/DM
# configure (or disable) firewall (both machines)
/etc/init.d/iptables stop
# DM Deployment: DS Machine
# =========================
# Log into dmstorage node and create local DM deployment directory
# in dm user home area
cd /opt/DM
ls -l
# Checkout code as release 0.2
svn co https://subversion.xray.aps.anl.gov/DataManagement/trunk dev
# Build support area
cd dev
make support
# Source setup
source setup.sh
# Create db
make db
# Configure Web Portal
# Note:
# - this needs to be done only during the first portal deployment,
# or after portal has been unconfigured explicitly
# - this step configures DB access
# - adds initial DM system user to the DB
make configure-web-portal
# Add a few users
#dm-add-user --username dmuser1 --first-name Test --last-name User1
#dm-add-user --username dmuser2 --first-name Test --last-name User2
#dm-add-user --username dmuser3 --first-name Test --last-name User3
# Deploy Web Portal
# Note:
# - deploys portal war file into glassfish
# - after this step, users can access portal at
# https://dmstorage.svdev.net:8181/dm
make deploy-web-portal
# Deploy DS Web Service
# Note:
# - generates SSL certificates and configuration files
# - after this step, DS web service is accessible at port 22236
# - log files are under DM/var/log
# - configuration files are under DM/etc
# - user setup file is DM/etc/dm.setup.sh
# - service control script is under DM/dm-0.2/etc/init.d
make deploy-ds-web-service
# Check functionality. Open second terminal and log into dmstorage node
# as user sveseli
# Source setup file to get access to DM commands
source /opt/DM/etc/dm.setup.sh
# Get user list as administrator (dm) account
dm-get-users
# DM Deployment: DAQ Machine/HPC Machine
# ======================================
# Log into dmdaq node and create local DM deployment directory
# in dm user home area
cd /opt/DM
ls -l
# Checkout code as release 0.2
svn co https://subversion.xray.aps.anl.gov/DataManagement/trunk dev
# Build support area
# Note the following:
# - since demo machines are identical, we could simply copy support/dm code
# from the storage node; this is not necessarily the case in general
# - support area and DM code distribution can be shared between DAQ and DS
# nodes
# - support area on the daq node is much lighter (i.e., no need
# for glassfish, etc.)
cd dev
make support-daq
# Source setup
source setup.sh
# Deploy DAQ Web Service
# Note:
# - requires storage node to be installed
# - generates SSL certificates and configuration files
# - after this step, DAQ web service is accessible at port 33336
# - log files are under DM/var/log
# - configuration files are under DM/etc
# - user setup file is DM/etc/dm.setup.sh
make deploy-daq-web-service
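# Quick sanity check (an assumption, not part of the original notes): confirm
# the DAQ web service is listening on port 33336
netstat -tlnp | grep 33336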