Data Management
Paper Title Page
MOPHA063 Towards a Common Reliability & Availability Information System for Particle Accelerator Facilities 356
 
  • K. Höppner, Th. Haberer, K. Pasic, A. Peters
    HIT, Heidelberg, Germany
  • J. Gutleber, A. Niemi
    CERN, Meyrin, Switzerland
  • H. Humer
    AIT, Vienna, Austria
 
  Funding: This project has received funding from the European Union’s Horizon 2020 Research and Innovation program under grant agreement No 730871.
Failure event and maintenance record based data collection systems have a long tradition in industry. Today, the particle accelerator community does not possess a common platform that permits storing and sharing reliability and availability information in an efficient way. In large accelerator facilities used for fundamental physics research, each machine is unique; the scientific culture, work organization, and management structures are often incompatible with a streamlined industrial approach. Other accelerator facilities, such as medical accelerators, are entering the area of industrial process improvement due to legal requirements and constraints. The Heidelberg Ion Beam Therapy Center is building up a system for reliability and availability analysis, exploring the technical and organizational requirements for a community-wide information system on accelerator system and component reliability and availability. This initiative is part of the EU H2020 project ARIES, started in May 2017. We present the technical scope of the system, which is intended to access and obtain information specific to reliability statistics in ways that do not compromise the information suppliers and system producers.
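The reliability figures such an information system would hold can be derived directly from failure event records. A minimal sketch of the arithmetic, with invented fault records (not data from the ARIES system):

```python
from datetime import datetime, timedelta

# Hypothetical failure-event records for one subsystem: (fault start, fault end),
# as a maintenance log might provide them.
faults = [
    (datetime(2019, 3, 1, 8, 0), datetime(2019, 3, 1, 9, 30)),
    (datetime(2019, 3, 10, 14, 0), datetime(2019, 3, 10, 14, 45)),
]

def availability_stats(faults, period_start, period_end):
    """Compute availability, MTTR and MTBF over one operation period."""
    downtime = sum((end - start for start, end in faults), timedelta())
    total = period_end - period_start
    uptime = total - downtime
    n = len(faults)
    return {
        "availability": uptime / total,                    # fraction of period up
        "mttr_hours": downtime / n / timedelta(hours=1),   # mean time to repair
        "mtbf_hours": uptime / n / timedelta(hours=1),     # mean time between failures
    }

stats = availability_stats(faults, datetime(2019, 3, 1), datetime(2019, 4, 1))
```

The hard part the paper addresses is not this arithmetic but agreeing on common record semantics across facilities so such numbers become comparable.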
 
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA063  
About • paper received ※ 04 October 2019       paper accepted ※ 08 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA071 Integrated Multi-Purpose Tool for Data Processing and Analysis via EPICS PV Access 379
 
  • J.H. Kim, H.S. Kim, Y.M. Kim, H.-J. Kwon, Y.G. Song
    Korea Atomic Energy Research Institute (KAERI), Gyeongbuk, Republic of Korea
 
  Funding: This work has been supported through KOMAC (Korea Multi-purpose Accelerator Complex) operation fund of KAERI by MSIT (Ministry of Science and ICT)
At KOMAC, we operate a proton linac consisting of an ion source, a low-energy beam transport, a radio-frequency quadrupole, and eleven drift tube linacs reaching 100 MeV. The beam that users require is transported to the five target rooms using a linac control system based on the EPICS framework. To offer stable beam conditions, it is important to understand the characteristics of the 100 MeV proton linac. Beam diagnostic systems, such as the beam current, beam phase, and beam position monitoring systems, are therefore installed on the linac. All data from the diagnostic systems are monitored using Control System Studio as the user interface and are archived through the Archiver Appliance. Operators analyze the data after experiments to characterize the linac or when events happen, so data scanning and processing tools are required to manage and analyze the linac more efficiently. In this paper, we describe the implementation of integrated data processing and analysis tools based on EPICS PV Access.
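A data-processing tool of this kind typically starts from the archived samples. A small sketch of extracting values from an Archiver Appliance style JSON response and computing beam-current statistics (the PV name and sample values are illustrative, not KOMAC data):

```python
import json
import statistics

# Illustrative response in the Archiver Appliance JSON layout: a list of
# PV blocks, each with "meta" and a list of timestamped samples.
payload = json.loads("""
[{"meta": {"name": "BCM:CURRENT"},
  "data": [{"secs": 1569830400, "val": 19.8},
           {"secs": 1569830401, "val": 20.1},
           {"secs": 1569830402, "val": 20.0}]}]
""")

def pv_stats(blocks, name):
    """Return mean/min/max of the archived samples for one PV."""
    for block in blocks:
        if block["meta"]["name"] == name:
            vals = [sample["val"] for sample in block["data"]]
            return {"mean": statistics.mean(vals),
                    "min": min(vals),
                    "max": max(vals)}
    raise KeyError(name)

stats = pv_stats(payload, "BCM:CURRENT")
```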
 
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA071  
About • paper received ※ 30 September 2019       paper accepted ※ 02 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA086 The Design of Experimental Performance Analysis and Visualization System 409
 
  • J. Luo, L. Li, Z. Ni, X. Zhou
    CAEP, Sichuan, People’s Republic of China
  • Y. Gao
    Stony Brook University, Stony Brook, New York, USA
 
  The analysis of experimental performance is an essential task in any experiment. With the increasing demand for experimental data mining and utilization, methods of experimental data analysis abound, including visualization, multi-dimensional performance evaluation, experimental process modeling, and performance prediction, to name but a few. We design and develop an experimental performance analysis and visualization system consisting of a data source configuration component, an algorithm management component, and a data visualization component. It provides capabilities such as experimental data extraction and transformation, flexible algorithm configuration and validation, and multi-view presentation of experimental performance. It brings great convenience and improvement to the analysis and verification of experimental performance.  
poster icon Poster MOPHA086 [0.232 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA086  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA092 Prototyping the Resource Manager and Central Control System for the Cherenkov Telescope Array 426
 
  • D. Melkumyan, I. Sadeh, T. Schmidt, P.A. Wegner
    DESY Zeuthen, Zeuthen, Germany
  • M. Fuessling, I. Oya
    CTA, Heidelberg, Germany
  • S. Sah, M. Sekoranja
    Cosylab, Ljubljana, Slovenia
  • U. Schwanke
    Humboldt University Berlin, Institut für Physik, Berlin, Germany
  • J. Schwarz
    INAF-Osservatorio Astronomico di Brera, Merate, Italy
 
  The Cherenkov Telescope Array (CTA) will be the next-generation ground-based observatory for gamma-ray astronomy at very high energies. CTA will consist of two large arrays with 118 Cherenkov telescopes in total, deployed at the Paranal (Chile) and Roque de los Muchachos (Canary Islands, Spain) observatories. The Array Control and Data Acquisition (ACADA) system provides the means to execute observations and to handle the acquisition of scientific data in CTA. The Resource Manager & Central Control (RM&CC) sub-system is a core element of the ACADA system. It implements the execution of observation requests received from the scheduler sub-system and provides infrastructure services concerning the administration of various resources to all ACADA sub-systems. The RM&CC is also responsible for the dynamic allocation and management of concurrent operations of up to nine telescope sub-arrays, which are logical groupings of individual CTA telescopes performing coordinated scientific operations. This contribution presents a summary of the main RM&CC design features and of the future plans for prototyping.  
poster icon Poster MOPHA092 [1.595 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA092  
About • paper received ※ 18 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA117 Big Data Archiving From Oracle to Hadoop 497
 
  • I. Prieto Barreiro, M. Sobieszek
    CERN, Meyrin, Switzerland
 
  The CERN Accelerator Logging Service (CALS) is used to persist data of around 2 million predefined signals coming from heterogeneous sources such as the electricity infrastructure, industrial controls like cryogenics and vacuum, or beam-related data. This old Oracle-based logging system will be phased out at the end of the LHC’s Long Shutdown 2 (LS2) and will be replaced by the Next CERN Accelerator Logging Service (NXCALS), which is based on Hadoop. As a consequence, the different data sources must be adapted to persist the data in the new logging system. This paper describes the solution implemented to archive into NXCALS the data produced by the QPS (Quench Protection System) and SCADAR (Supervisory Control And Data Acquisition Relational database) systems, which generate a total of around 175,000 values per second. To cope with such a volume of data, the new service has to be extremely robust, scalable and fail-safe, with guaranteed data delivery and no data loss. The paper also explains how to recover from different failure scenarios, such as network disruption, and how to manage and monitor this highly distributed service.  
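Guaranteed delivery with no data loss usually comes down to local buffering with acknowledged hand-off: a record is discarded only once the sink confirms it. A toy sketch of that pattern (not the actual QPS/SCADAR-to-NXCALS code, which runs against Hadoop):

```python
from collections import deque

class ReliableSender:
    """Buffer records locally and drop each one only after the sink
    acknowledges it, so a transient outage loses no data."""
    def __init__(self, sink):
        self.sink = sink          # callable; returns True on an acknowledged write
        self.buffer = deque()

    def submit(self, record):
        self.buffer.append(record)
        self.flush()

    def flush(self):
        while self.buffer:
            if not self.sink(self.buffer[0]):
                break             # sink unavailable: keep the record, retry later
            self.buffer.popleft()

# Simulated sink that fails on the first two attempts (e.g. a network glitch).
delivered, attempts = [], [0]
def flaky_sink(record):
    attempts[0] += 1
    if attempts[0] <= 2:
        return False
    delivered.append(record)
    return True

sender = ReliableSender(flaky_sink)
for value in (1, 2, 3):
    sender.submit(value)
sender.flush()
```

Despite two failed delivery attempts, all three records arrive in order and the buffer drains, which is the invariant the production service must preserve at 175,000 values per second.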
poster icon Poster MOPHA117 [1.227 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA117  
About • paper received ※ 29 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA157 Global Information Management System for HEPS 606
 
  • C.H. Wang, C.P. Chu
    IHEP, Beijing, People’s Republic of China
  • H.H. Lv
    SINAP, Shanghai, People’s Republic of China
 
  HEPS is a large, complex scientific facility consisting of the accelerator, the beamlines, and general facilities. The accelerator is made up of many subsystems and a large number of components, such as magnets, power supplies, and radio-frequency and vacuum equipment. These components, together with their cables, are installed in a distributed fashion at a distance from each other, and during design, construction, and commissioning they produce tens of thousands of data records. Collecting, storing, and managing this much information is particularly important for a large scientific device. This paper describes the design and application of the HEPS database for the huge amounts of unique data generated from construction and installation through operation, with the aim of improving the availability and stability of the accelerator and experimental stations and further improving the overall performance.  
poster icon Poster MOPHA157 [0.756 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA157  
About • paper received ※ 29 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOSH1001 Current Status of KURAMA-II 641
MOPHA140   use link to see paper's listing under its alternate paper code  
 
  • M. Tanigaki
    Kyoto University, Research Reactor Institute, Osaka, Japan
 
  KURAMA-II, the successor of a car-borne gamma-ray survey system named KURAMA (Kyoto University RAdiation MApping system), has become one of the major systems for activities related to the nuclear accident at the TEPCO Fukushima Daiichi Nuclear Power Plant in 2011. The development of KURAMA-II is still ongoing, to extend its application areas beyond specialists. One such activity is the development of cloud services that provide an easy management environment for data handling and interactions with existing radiation monitoring schemes. Another is porting the system to a single-board computer, so that KURAMA-II can serve as a tool for the prompt establishment of radiation monitoring in a nuclear accident. In this paper, the current status of KURAMA-II developments and applications is introduced, along with some results from those applications.  
slides icon Slides MOSH1001 [94.239 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOSH1001  
About • paper received ※ 01 October 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUBPL01 Automatic Web Application Generation From an Irradiation Experiment Data Management Ontology (IEDM) 687
 
  • B. Gkotse, F. Ravotti
    CERN, Meyrin, Switzerland
  • B. Gkotse, P. Jouvelot
    MINES ParisTech, PSL Research University, Paris, France
 
  Funding: This project has received funding from the European Union’s Horizon 2020 Research and Innovation program under Grant Agreement no. 654168
Detectors and electronic components in High-Energy Physics experiments are nowadays often exposed to harsh radiation environments. Thus, to ensure reliable operation over time, their radiation tolerance must be assessed beforehand through dedicated testing experiments in irradiation facilities. To prevent data loss and perform accurate experiments, these facilities need to rely upon a proper data management system. In prior work, we provided a formal description of the key concepts involved in the data management of irradiation experiments using an ontology (IEDM)*. In this work, we show how this formalisation effort has a practical by-product via the introduction of an ontology-based methodology for the automatic generation of web applications, using IEDM as a use case. Moreover, we also compare this IEDM-generated web application to the IRRAD Data Manager (IDM), the manually developed web application used for the data handling of the CERN Proton Irradiation facility (IRRAD). Our approach should allow irradiation facility teams to gain access to state-of-the-art data management tools without incurring significant software development effort.
*Gkotse, B., Jouvelot, P., Ravotti, F.: IEDM: An Ontology for Irradiation Experiments Data Management. In: Extended Semantic Web Conference 2019, accepted in Posters and Demos. http://cern.ch/iedm
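The core idea of generating a web application from an ontology can be sketched with a toy class description: each datatype property of a class becomes a form widget. The class and property names below are invented for illustration and are not taken from IEDM:

```python
# Toy "ontology" fragment: one class with typed datatype properties,
# in the spirit of (but far simpler than) an OWL class description.
experiment_class = {
    "name": "IrradiationExperiment",
    "properties": [
        {"name": "sampleId", "type": "string"},
        {"name": "fluence", "type": "float"},
        {"name": "startDate", "type": "date"},
    ],
}

# Map ontology datatypes to HTML input widgets.
INPUT_TYPES = {"string": "text", "float": "number", "date": "date"}

def form_for(cls):
    """Generate an HTML form for one class: one labelled input per property."""
    rows = [f'<form id="{cls["name"]}">']
    for prop in cls["properties"]:
        widget = INPUT_TYPES[prop["type"]]
        rows.append(f'  <label>{prop["name"]}'
                    f' <input name="{prop["name"]}" type="{widget}"></label>')
    rows.append("</form>")
    return "\n".join(rows)

html = form_for(experiment_class)
```

The attraction of the approach is that the form, validation, and storage schema can all be regenerated whenever the ontology changes, instead of being maintained by hand as in IDM.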
 
slides icon Slides TUBPL01 [10.183 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-TUBPL01  
About • paper received ※ 30 September 2019       paper accepted ※ 21 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUBPL02 Enabling Open Science for Photon and Neutron Sources 694
 
  • A. Götz, J. Bodera Sempere, A. Campbell, A. De Maria Antolinos, R.D. Dimper, J. Kieffer, V.A. Solé, T. Vincet
    ESRF, Grenoble, France
  • M. Bertelsen, T. Holm Rod, T.S. Richter, J.W. Taylor
    ESS, Copenhagen, Denmark
  • N. Carboni
    CERIC-ERIC, Trieste, Italy
  • S. Caunt, J. Hall, J.F. Perrin
    ILL, Grenoble, France
  • J.C. E, H. Fangohr, C. Fortmann-Grote, T.A. Kluyver, R. Rosca
    EuXFEL, Schenefeld, Germany
  • F.M. Gliksohn
    ELI-DC, Brussels, Belgium
  • R. Pugliese
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
  • L. Schrettner
    ELI-ALPS, Szeged, Hungary
 
  Funding: This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 823852
Photon and neutron sources are producing more and more petabytes of scientific data each year. At the same time, scientific publishing is evolving to make scientific data part of publications. The Photon and Neutron Open Science Cloud (PaNOSC*) project is an EU-financed project to provide scientific data management for enabling Open Science. Data will be managed according to the FAIR principles. This means data will be curated and made available under an Open Data policy, and will be findable, accessible, interoperable and reusable. This paper will describe how the European photon and neutron sources on the ESFRI** roadmap envision PaNOSC as part of the European Open Science Cloud***. The paper will address the issues of data policy, metadata, data curation, long-term archiving and data sharing in the context of the latest developments in these areas.
*https://panosc.eu
**https://www.esfri.eu/
***https://ec.europa.eu/research/openscience/index.cfm?pg=open-science-cloud
 
slides icon Slides TUBPL02 [14.942 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-TUBPL02  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUBPL03 Experimental Data Transfer System BENTEN at SPring-8 702
 
  • T. Matsumoto, Y. Furukawa, Y. Hiraoka, M. Kodera, T. Matsushita, K. Nakada, A. Yamashita, S. Yokota
    JASRI, Hyogo, Japan
 
  Recently, there have been strong demands for open data to promote data science fields such as materials informatics. At SPring-8, we have operated a data transfer system for open XAFS measurement data since 2013*, the second largest such collection in the world by data volume**. However, it was difficult to satisfy demands such as generic use in experimental stations and data federation with other facilities. To overcome these limitations, we developed a new data transfer system, BENTEN. BENTEN provides an easy-to-use, unified interface with a REST API for data access from both inside and outside SPring-8. At SPring-8, a proposal number is assigned to each experiment, and the members of each proposal are defined in a database. Using authentication and this database, BENTEN can also restrict data access to those members. Data are registered with metadata such as experimental conditions and samples, and various metadata for the experiments can be easily defined. To achieve flexible data access with full-text search, we use Elasticsearch as the metadata store. We began operation of BENTEN and open access to the XAFS data in March of this year. We plan to use BENTEN to promote open data and data science with other experimental data as well.
*H. Sakai et al., Proc. of ICALEPCS 2013, p.577-579
**K. Asakura et al., J. Synchrotron Rad. (2018), 25, p.967-971
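Full-text metadata search of the kind BENTEN delegates to Elasticsearch largely comes down to building a query document. A sketch of constructing such a query (the field names, and the idea of filtering by proposal, are illustrative assumptions, not BENTEN's published API):

```python
import json

def build_search_query(text, proposal=None, size=20):
    """Build an Elasticsearch-style query document: full-text match over the
    metadata, optionally restricted to one proposal number."""
    must = [{"query_string": {"query": text}}]
    if proposal is not None:
        # Hypothetical keyword field holding the proposal number.
        must.append({"term": {"proposal": proposal}})
    return {"size": size, "query": {"bool": {"must": must}}}

query = build_search_query("XAFS Cu foil", proposal="2019A0001")
body = json.dumps(query)  # what a REST client would POST to the search endpoint
```

`query_string`, `bool`/`must`, and `term` are standard Elasticsearch Query DSL constructs; combining free text with exact-match filters is what makes "find my proposal's spectra for this sample" queries cheap.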
 
slides icon Slides TUBPL03 [5.165 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-TUBPL03  
About • paper received ※ 28 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUBPL04 Public Cloud-based Remote Access Infrastructure for Neutron Scattering Experiments at MLF, J-PARC 707
 
  • K. Moriyama
    CROSS, Ibaraki, Japan
  • T. Nakatani
    JAEA/J-PARC, Tokai-mura, Japan
 
  An infrastructure for remote access supporting the research workflow is essential for neutron scattering user facilities such as J-PARC MLF. Because the experimental period spans day and night, a service for monitoring the measurement status from outside the facility is required. Additionally, a convenient way to bring a large amount of data back to the user’s home institution and to analyze it after experiments is required. To meet these requirements, we are developing a remote access infrastructure as a front-end for facility users based on public clouds. Recently, public clouds have developed rapidly, so that development and operation schemes of computer systems have changed considerably. The various architectures provided by public clouds enable advanced systems to be developed quickly and effectively. Our cloud-based infrastructure comprises services for experiment monitoring, data download and data analysis, using architectures such as object storage, event-driven serverless computing, and virtual desktop infrastructure (VDI). Facility users can access this infrastructure using a web browser and a VDI client. This contribution reports the current status of the remote access infrastructure.  
slides icon Slides TUBPL04 [6.858 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-TUBPL04  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUBPL05 RecSyncETCD: A Fault-tolerant Service for EPICS PV Configuration Data 714
 
  • T. Ashwarya, E.T. Berryman, M.G. Konrad
    FRIB, East Lansing, Michigan, USA
 
  Funding: Work supported by the U.S. Department of Energy Office of Science under Cooperative Agreement DESC0000661
RecCaster is an EPICS module which is responsible for uploading Process Variable (PV) metadata from the IOC database to a central server called RecCeiver. The RecCeiver service is a custom-built application that passes this data on to ChannelFinder, a REST-based search service. Together, RecCaster and RecCeiver form the building blocks of RecSync. RecCeiver is not a distributed service, which makes it challenging to ensure high availability and fault tolerance for its clients. We have implemented a new version of RecCaster which uploads the PV metadata to ETCD. ETCD is a commercial off-the-shelf distributed key-value store intended for highly available data storage and retrieval. It provides fault tolerance, as the service can be replicated on multiple servers to keep the data consistent. ETCD is a drop-in replacement for the existing RecCeiver to provide data storage and retrieval for PV metadata. Also, ETCD has a well-documented interface for client operations, including the ability to live-watch the PV metadata for its clients. This paper discusses the design and implementation of RecSyncETCD as a fault-tolerant service for storing and retrieving EPICS PV metadata.
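Storing PV metadata in a key-value store is mostly a question of choosing a key scheme that makes prefix (range) queries useful. A sketch with a plain dict standing in for the ETCD key space (the key layout is invented for illustration, not the one RecSyncETCD uses):

```python
# A dict stands in for etcd; in the real service the same puts and
# prefix queries would go through an etcd client against a replicated cluster.
store = {}

def put_pv(store, ioc, pv, metadata):
    """Record one PV's metadata under a hierarchical key per field."""
    for field, value in metadata.items():
        store[f"/pvs/{ioc}/{pv}/{field}"] = value

def pvs_of_ioc(store, ioc):
    """Prefix query: names of all PVs hosted by one IOC."""
    prefix = f"/pvs/{ioc}/"
    return sorted({key[len(prefix):].split("/")[0]
                   for key in store if key.startswith(prefix)})

put_pv(store, "ioc-bpm-01", "BPM1:X", {"recordType": "ai", "units": "mm"})
put_pv(store, "ioc-bpm-01", "BPM1:Y", {"recordType": "ai", "units": "mm"})
```

With etcd, the same prefix can also be watched, which is what lets clients such as ChannelFinder react live to IOCs appearing or disappearing.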
 
slides icon Slides TUBPL05 [1.099 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-TUBPL05  
About • paper received ※ 26 September 2019       paper accepted ※ 02 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUBPL06 Energy Consumption Monitoring With Graph Databases and Service Oriented Architecture 719
 
  • A. Kiourkos, S. Infante, K.S. Seintaridis
    CERN, Meyrin, Switzerland
 
  CERN is a major electricity consumer. In 2018 it consumed 1.25 TWh, one third of the consumption of Geneva. Monitoring of this consumption is crucial for operational reasons, but also for raising awareness among users regarding energy utilization. This monitoring is done via a system developed internally, which is quite popular within the CERN community; therefore, to accommodate the increasing requirements, a migration is underway that utilizes the latest technologies for data modeling and processing. We present the architecture of the new energy monitoring system with an emphasis on data modeling, versioning, and the use of graphs to store and process the model of the electrical network for the energy calculations. The algorithms that are used are presented, and a comparison with the existing system is performed in order to demonstrate the performance improvements and flexibility of the new approach. The system embraces Service Oriented Architecture principles, and we illustrate how these have been applied in its design. The different modules and future possibilities are also presented, with an analysis of their strengths, weaknesses, and integration within the CERN infrastructure.  
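Modelling the electrical network as a graph makes consumption aggregation a simple traversal: the consumption of any node is its own metered reading plus that of everything fed from it. A toy sketch (node names and readings invented, not CERN's network model):

```python
# Toy distribution network: feeder -> substations -> metered loads,
# stored as a child map, with kW readings on the leaf loads only.
children = {
    "feeder": ["subA", "subB"],
    "subA": ["magnets", "cryo"],
    "subB": ["ventilation"],
}
readings = {"magnets": 310.0, "cryo": 540.0, "ventilation": 120.0}

def consumption(node):
    """Total consumption of a node: its own reading plus all descendants'."""
    total = readings.get(node, 0.0)
    for child in children.get(node, []):
        total += consumption(child)
    return total

total_kw = consumption("feeder")
```

Keeping the network itself in a graph database means this traversal survives topology changes (a load re-cabled to another substation) without touching the aggregation code, which is one motivation for the migration described above.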
slides icon Slides TUBPL06 [3.018 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-TUBPL06  
About • paper received ※ 29 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEMPR001 Data Analysis Infrastructure for Diamond Light Source Macromolecular & Chemical Crystallography and Beyond 1031
WEPHA094   use link to see paper's listing under its alternate paper code  
 
  • M. Gerstel, A. Ashton, R.J. Gildea, K. Levik, G. Winter
    DLS, Oxfordshire, United Kingdom
 
  The Diamond Light Source data analysis infrastructure, Zocalo, is built on a messaging framework. Analysis tasks are processed by a scalable pool of workers running on cluster nodes. Results can be written to a common file system, sent to another worker for further downstream processing, and/or streamed to a LIMS. Zocalo allows increased parallelization of computationally expensive tasks and makes the use of computational resources more efficient. The infrastructure is low-latency, fault-tolerant, and allows for highly dynamic data processing. Moving away from static workflows expressed in shell scripts, we can easily re-trigger processing tasks in the event that an issue is found. It allows users to re-run tasks with additional input and ensures that automatically and manually triggered processing results are treated equally. Zocalo was originally conceived to cope with the additional demand on infrastructure caused by the introduction of Eiger detectors with up to 18 Mpixels, running at up to 560 Hz frame rate on single-crystal diffraction beamlines. We are now adapting Zocalo to manage processing tasks for ptychography, tomography, cryo-EM, and serial crystallography workloads.  
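The worker-pool-on-a-message-queue pattern underneath such an infrastructure can be sketched with the standard library's thread-safe queue (Zocalo itself rides on a real message broker across cluster nodes; the "analysis" here is a stand-in):

```python
import queue
import threading

tasks = queue.Queue()     # stands in for the message broker
results = []
lock = threading.Lock()

def worker():
    """Consume analysis messages until a stop sentinel arrives."""
    while True:
        message = tasks.get()
        if message is None:
            break
        # Stand-in "analysis"; a real worker would run spot finding etc.
        outcome = {"image": message["image"], "spots": message["n"] * 2}
        with lock:
            results.append(outcome)   # or forward to a downstream worker / LIMS

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()
for i in range(10):
    tasks.put({"image": f"frame_{i}", "n": i})
for _ in workers:
    tasks.put(None)       # one stop sentinel per worker
for w in workers:
    w.join()
```

Because producers and consumers only share the queue, the pool can be scaled, and failed messages re-queued, without changing either side, which is what makes re-triggering individual tasks cheap.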
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEMPR001  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEMPR005 The Array Control and Data Acquisition System of the Cherenkov Telescope Array 1046
WEPHA117   use link to see paper's listing under its alternate paper code  
 
  • I. Oya, E. Antolini, M. Fuessling
    CTA, Heidelberg, Germany
  • L. Baroncelli, A. Bulgarelli, V. Conforti, N. Parmiggiani
    INAF, Bologna, Italy
  • J. Borkowski
    CAMK, Torun, Poland
  • A. Carosi, J.N. Jacquemier, G. Maurin
    IN2P3-LAPP, Annecy-le-Vieux, France
  • J. Colome
    CSIC-IEEC, Bellaterra, Spain
  • C. Hoischen
    Universität Potsdam, Potsdam-Golm, Germany
  • E. Lyard, R. Walter
    University of Geneva, Geneva, Switzerland
  • D. Melkumyan, K. Mosshammer, I. Sadeh, T. Schmidt, P.A. Wegner
    DESY Zeuthen, Zeuthen, Germany
  • U. Schwanke
    Humboldt University Berlin, Institut für Physik, Berlin, Germany
  • J. Schwarz
    INAF-Osservatorio Astronomico di Brera, Merate, Italy
  • G. Tosti
Università degli Studi di Perugia, Perugia, Italy
 
  The Cherenkov Telescope Array (CTA) project is the initiative to build the next-generation gamma-ray observatory. With more than 100 telescopes planned to be deployed in two sites, CTA is one of the largest astronomical facilities under construction. The Array Control and Data Acquisition (ACADA) system will be the central element of on-site CTA Observatory operations. The mission of the ACADA system is to manage and optimize the telescope array operations at each of the CTA sites. To that end, ACADA will provide all necessary means for the efficient execution of observations, and for the handling of the several Gb/s generated by each individual CTA telescope. The ACADA system will contain a real-time analysis pipeline, dedicated to the automatic generation of science alert candidates based on the inspection of data being acquired. These science alerts, together with external alerts arriving from other scientific installations, will permit ACADA to modify ongoing observations at sub-minute timescales in order to study high-impact scientific transient phenomena. This contribution describes the challenges, architecture, design principles, and development status of the ACADA system.  
poster icon Poster WEMPR005 [3.851 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEMPR005  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEPHA014 EPICS Archiver Appliance - Installation and Use at BESSY/HZB 1093
 
  • T. Birke
    HZB, Berlin, Germany
 
  After two years of tests and development, the EPICS Archiver Appliance went into operation at HZB/BESSY in April 2018. After running for a year as an optional new archiver, the Archiver Appliance switched places with the old Channel Archiver and is now the central production archiver in currently three installations (four at the time of this conference) at HZB. To provide a smooth transition from the Channel Archiver to the EPICS Archiver Appliance for end users as well as applications, some frontends, e.g. the ArchiveViewer, and other applications needed modifications to be fully usable. New retrieval frontends are also provided and will replace the ArchiveViewer in the future. In addition, the versatile retrieval API has rapidly improved the development of Python applications for analysis and optimization. Experiences with the installation, configuration, maintenance and use of the EPICS Archiver Appliance are shared in this paper.  
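The retrieval API mentioned above is plain HTTP, which is what makes quick Python analysis scripts so easy. A sketch of building a request URL in the Archiver Appliance retrieval convention (`getData.json` with `pv`/`from`/`to` parameters); the host and PV name are placeholders:

```python
from urllib.parse import urlencode

def retrieval_url(base, pv, start_iso, end_iso):
    """URL for the archived data of one PV over an ISO-8601 time range."""
    params = urlencode({"pv": pv, "from": start_iso, "to": end_iso})
    return f"{base}/retrieval/data/getData.json?{params}"

url = retrieval_url("http://archiver.example.org:17668",
                    "SOME:PV",
                    "2019-10-01T00:00:00Z", "2019-10-02T00:00:00Z")
# The JSON answer could then be fetched with urllib.request and fed
# straight into numpy/pandas for analysis and optimization work.
```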
poster icon Poster WEPHA014 [9.140 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA014  
About • paper received ※ 29 September 2019       paper accepted ※ 19 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEPHA047 Cable Database at ESS 1199
 
  • R.N. Fernandes, S.R. Gysin, J.A. Persson, S. Regnell
    ESS, Lund, Sweden
  • L.J.G. Johansson
    OTIF, Malmö, Sweden
  • S. Sah
    Cosylab, Ljubljana, Slovenia
  • M. Salmič
    COSYLAB, Control System Laboratory, Ljubljana, Slovenia
 
  When completed, the European Spallation Source (ESS) will have around half a million installed cables to power and control both the machine and the end-station instruments. To keep track of all these cables throughout the different phases of ESS, an application called the Cable Database was developed at the Integrated Control System (ICS) Division. It provides a web-based graphical interface where authorized users may perform CRUD operations on cables, as well as batch imports (through well-defined Excel files) to substantially shorten the time needed to deal with massive numbers of cables at once. Besides cables, the Cable Database manages cable types, connectors, manufacturers and routing points, thus fully handling the information that surrounds cables. Additionally, it provides a programmatic interface through RESTful services that other ICS applications (e.g. the CCDB) may consume to successfully perform their domain-specific business. The present paper introduces the Cable Database and describes its features, architecture and technology stack, data concepts and interfaces. Finally, it enumerates development directions that could be pursued to further improve this application.  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA047  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEPHA095 Managing Archiver Rules for Individual EPICS PVs in FRIB’s Diagnostics System 1312
 
  • B.S. Martins, S. Cogan, S.M. Lidia, D.O. Omitto
    FRIB, East Lansing, Michigan, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy Office of Science under Cooperative Agreement DE-SC0000661, the State of Michigan, and Michigan State University.
The Beam Instrumentation and Measurements group at the Facility for Rare Isotope Beams is responsible for maintaining several EPICS IOC instances for beam diagnostics, of different IOC types, which end up generating tens of thousands of PVs. Given the heterogeneity of diagnostics devices, the need to archive data for scientific and debugging purposes, and space limitations for archived data storage, there is a need for per-PV (as opposed to per-record) archiving rules in order to maximize utility and minimize storage footprint. This work presents our solution to the problem: "IOC Manager", a custom tool that leverages continuous integration, a relational database, and a custom EPICS module to allow users to specify regular-expression-based rules for the archiver in a web interface.
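The per-PV rule matching at the heart of such a tool is straightforward: an ordered list of regular expressions is tried against each PV name and the first match decides the archiving policy. A sketch (the rule syntax and PV names are invented for illustration, not IOC Manager's actual format):

```python
import re

# Ordered rules: the first matching regex wins. The value is the archiving
# period in seconds; None means "do not archive this PV at all".
rules = [
    (re.compile(r".*:RAW$"), None),      # never archive raw waveforms
    (re.compile(r"BPM\d+:.*"), 1.0),     # beam position PVs: 1 Hz
    (re.compile(r".*:TEMP$"), 60.0),     # temperatures: once a minute
]
DEFAULT_PERIOD = 10.0                    # fallback for unmatched PVs

def archive_period(pv):
    """Return the archiving period for one PV, or None to skip it."""
    for pattern, period in rules:
        if pattern.fullmatch(pv):
            return period
    return DEFAULT_PERIOD

decisions = {pv: archive_period(pv)
             for pv in ("BPM01:X", "CAV1:TEMP", "BPM01:RAW", "MAG:CURRENT")}
```

Regex rules keep the configuration compact: a handful of patterns covers tens of thousands of PVs, while still allowing one-off exceptions by placing a more specific pattern earlier in the list.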
 
poster icon Poster WEPHA095 [0.212 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA095  
About • paper received ※ 30 September 2019       paper accepted ※ 20 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEPHA161 Revisiting the Bunch-Synchronized Data Acquisition System for the European XFEL Accelerator 1460
 
  • T. Wilksen, A. Aghababyan, L. Fröhlich, O. Hensler, R. Kammering, K. Rehlich, V. Rybnikov
    DESY, Hamburg, Germany
 
  After about two years in operation, the bunch-synchronized data acquisition used with the accelerator control system at the European XFEL is being revisited and re-evaluated. Now that we have gained quite some experience with the current system design, it was found to have shortfalls, specifically with respect to the offered methods for data retrieval and management. In the context of modern data collection and management technologies readily in use by large internet companies, new frameworks are being evaluated as a control-system-independent replacement for data reduction, processing and online analysis. The main focus is currently on streaming technologies. Different approaches are discussed in this paper and reviewed for feasibility and adaptability for the control system architectures used at DESY’s accelerator facilities.  
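Stream-based data reduction of bunch-synchronized readings can be sketched as a generator pipeline: a windowed stage collapses raw per-bunch values into summary records as they flow past. This is only a stand-in for a real streaming framework of the kind under evaluation; the window size and reduction are illustrative:

```python
def reduce_stream(samples, window=10):
    """Collapse every `window` consecutive bunch readings into one
    (mean, peak) record, the kind of reduction a streaming stage performs."""
    block = []
    for value in samples:
        block.append(value)
        if len(block) == window:
            yield {"mean": sum(block) / window, "peak": max(block)}
            block = []

# A fake stream of 100 bunch charge readings.
stream = (float(i % 10) for i in range(100))
reduced = list(reduce_stream(stream, window=10))
```

Because the stage is a generator, it processes data as it arrives and never holds more than one window in memory, which is the property that makes the approach attractive for online analysis.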
poster icon Poster WEPHA161 [2.687 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA161  
About • paper received ※ 27 September 2019       paper accepted ※ 20 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEPHA166 Development of Web-based Parameter Management System for SHINE 1478
 
  • H.H. Lv
    SINAP, Shanghai, People’s Republic of China
  • C.P. Chu
    IHEP, Beijing, People’s Republic of China
  • Y.B. Leng, Y.B. Yan
    SSRF, Shanghai, People’s Republic of China
 
  A web-based parameter management system for the Shanghai High repetition rate XFEL aNd Extreme light facility (SHINE) has been developed for accelerator physicists and researchers to communicate with each other and track the modification history. The system is based on the standard J2EE GlassFish platform, with a MySQL database used as the backend data storage. The user interface is designed with JavaServer Faces, incorporating the MVC architecture. The system is of great convenience to researchers during the facility design process.  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA166  
About • paper received ※ 12 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)