Control System Infrastructure
Paper Title Page
MOMPL006 Automatic Deployment in a Control System Environment 126
Alternate paper code: MOPHA074
 
  • M.G. Konrad, S. Beher, A.P. Lathrop, D.G. Maxwell, J.P.H. Ryan
    FRIB, East Lansing, Michigan, USA
 
  Funding: Work supported by the U.S. Department of Energy Office of Science under Cooperative Agreement DE-SC0000661
Development of many software projects at the Facility for Rare Isotope Beams (FRIB) follows an agile development approach. An important part of this practice is to make new software versions available to users frequently to meet their changing needs during commissioning and to get feedback from them in a timely manner. However, building, testing, packaging, and deploying software manually can be a time-consuming and error-prone process. We will present processes and tools used at FRIB to standardize and automate the required steps. We will also describe our experience upgrading control system computers to a new operating system version as well as to a new EPICS release.
 
Poster MOMPL006 [3.806 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-MOMPL006
Paper received: 03 October 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
 
MOMPL009 Control System Virtualization at Karlsruhe Research Accelerator 143
Alternate paper code: MOPHA093
 
  • W. Mexner, B. Aydt, E. Blomley, E. Bründermann, D. Hoffmann, A.-S. Müller, M. Schuh
    KIT, Eggenstein-Leopoldshafen, Germany
  • S. Marsching
    Aquenos GmbH, Baden-Baden, Germany
 
  With the deployment of a Storage Spaces Direct hyper-converged cluster in 2018, the whole control system server and network infrastructure of the Karlsruhe Research Accelerator has been virtualized to improve control system availability. The cluster of six Dell PowerEdge R740xd servers, with 1152 GB RAM, 72 cores and 40 TByte of hyper-converged storage in total, operates 120 virtual machines. We will report on our experience running EPICS IOCs and the industrial control system WinCC OA in this virtual environment.
Poster MOMPL009 [0.608 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-MOMPL009
Paper received: 27 September 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
 
MOPHA022 Implementation of ISO 50001 Energy Management System With the Advantage of Archive Viewer in NSRRC 239
 
  • C.S. Chen, W.S. Chan, Y.Y. Cheng, Y.F. Chiu, Y.-C. Chung, K.C. Kuo, M.T. Lee, Y.-C. Lin, C.Y. Liu, Z.-D. Tsai
    NSRRC, Hsinchu, Taiwan
 
  Due to the limited energy resources in Taiwan, energy conservation is always a big issue for everyone who lives in this country. According to data from the related departments, nearly 98% of Taiwan's energy has been imported from abroad for more than a decade. Despite the strong dependency on foreign fuel imports, the energy subsidy policy leads to a relatively low cost of energy for end users, which is not reasonable. In order to address the energy resource shortage and pursue more efficient energy use, the implementation of the ISO 50001 energy management system was launched at NSRRC this year, taking advantage of the Archive Viewer. The energy management system will build up an overall energy usage model and several energy performance indicators to help us achieve efficient energy usage.
Poster MOPHA022 [0.842 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA022
Paper received: 30 September 2019 · Paper accepted: 10 October 2019 · Issue date: 30 August 2020
 
MOPHA026 Development of an Online Diagnostic Toolkit for the UPS Control System 246
 
  • H.Z. Chen, Y.-S. Cheng, K.T. Hsu, K.H. Hu, C.Y. Liao, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
  Most IOC (Input Output Controller) platforms and servers in the TPS control system are connected to uninterruptible power supplies (UPS) to ride through short interruptions of the mains electricity. To achieve higher availability, the batteries and circuits of the UPS system have to be maintained periodically. An online diagnostic toolkit was therefore developed to monitor the status of the UPS system and to indicate which abnormal components should be replaced. A dedicated EPICS IOC has been implemented to communicate with each UPS device via SNMP. The PV states of the UPS system are published and archived, and specific graphical applications are designed to show the existing control environment via EPICS CA (Channel Access). This paper reports the development of the online diagnostic toolkit for the UPS system.
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA026
Paper received: 30 September 2019 · Paper accepted: 10 October 2019 · Issue date: 30 August 2020
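
The SNMP-to-EPICS bridging described in MOPHA026 can be illustrated with a small sketch. It is not the NSRRC implementation (which uses a dedicated EPICS IOC); it simply polls a UPS over SNMP and republishes the readings as PVs. The host, PV names and polling period are hypothetical, the OIDs follow the standard UPS-MIB but may differ per vendor, and the net-snmp command-line tools and the pyepics client library are assumed to be available.

    # Minimal sketch: poll a UPS via SNMP and republish the readings as EPICS PVs.
    # Host, PV names and poll period are hypothetical; OIDs follow the standard UPS-MIB.
    import subprocess
    import time

    import epics  # pyepics Channel Access client

    UPS_HOST = "ups-cia-01.example.org"
    OID_MAP = {
        "UPS01:BatteryCapacity": "1.3.6.1.2.1.33.1.2.4.0",      # upsEstimatedChargeRemaining
        "UPS01:OutputLoad":      "1.3.6.1.2.1.33.1.4.4.1.5.1",  # upsOutputPercentLoad
    }

    def snmp_get(host, oid):
        """Read a single numeric value via snmpget (SNMP v2c, community 'public')."""
        out = subprocess.check_output(
            ["snmpget", "-v2c", "-c", "public", "-Oqv", host, oid], text=True)
        return float(out.strip())

    while True:
        for pv_name, oid in OID_MAP.items():
            try:
                epics.caput(pv_name, snmp_get(UPS_HOST, oid))
            except (subprocess.CalledProcessError, ValueError) as exc:
                print(f"poll failed for {pv_name}: {exc}")
        time.sleep(10)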
 
MOPHA031 Software and Hardware Design for Controls Infrastructure at Sirius Light Source 263
 
  • J.G.R.S. Franco, C.F. Carneiro, E.P. Coelho, R.C. Ito, P.H. Nallin, R.W. Polli, A.R.D. Rodrigues, V. dos Santos Pereira
    LNLS, Campinas, Brazil
 
  Sirius is a 3 GeV synchrotron light source under construction in Brazil. Assembly of its accelerators began in March 2018, when the first parts of the linear accelerator were taken out of their boxes and installed. The booster synchrotron installation has already been completed and its subsystems are currently under commissioning, while assembly of storage ring components takes place in parallel. The control system of the Sirius accelerators, based on EPICS, plays an important role in machine commissioning, and installations and improvements have been achieved continuously. This work describes the IT infrastructure underlying the control system, the hardware developments, the software architecture, and the support applications. Future plans are also presented.
Poster MOPHA031 [32.887 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA031
Paper received: 01 October 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
 
MOPHA067 New Injection Information Archiver for SuperKEKB 370
 
  • H. Kaji
    KEK, Ibaraki, Japan
 
  We upgraded the Injection Archiver System of the SuperKEKB collider. It records information related to beam injection. The system is configured on the EPICS network. The database server employs the Archiver Appliance as its database management system. In addition, a distributed shared memory is installed on the database server. Its memory area is synchronized with other nodes, such as the bunch current monitor, via an optical connection. The database server can therefore collect data such as the bunch current at the RF bucket into which the beam pulse is injected. By using this dedicated optical network, we achieve high-speed and stable data acquisition. The injection data can be recorded pulse by pulse at 50 Hz without any packet loss.
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA067
Paper received: 03 October 2019 · Paper accepted: 23 October 2019 · Issue date: 30 August 2020
 
MOPHA080 Automatic Reconfiguration of CERN 18 kV Electrical Distribution - the Auto Transfer Control System 400
 
  • J.C. Letra Simoes, S. Infante, F.A. Marin
    CERN, Geneva, Switzerland
 
  Availability is key to electrical power distribution at CERN. The CERN electrical network has been consolidated over the last 15 years in order to cope with the evolving needs of the laboratory and now comprises a 200 MW supply from the French grid at 400 kV, a partial backup from the Swiss grid at 130 kV and 16 diesel generators. The Auto Transfer Control System has a critical role in minimizing the duration of power cuts on this complex electrical network, thus significantly reducing the impact of downtime on CERN accelerator operation. In the event of a major power loss, the control system analyzes the global status of the network and decides how to reconfigure it from alternative sources, following predefined constraints and priorities. The Auto Transfer Control System is based on redundant programmable logic controllers (PLCs) with multiple remote I/O stations linked via an Ethernet IP ring (over optical fiber) across the three major substations at CERN. This paper describes the system requirements, the constraints and the applicable technologies, which will be used to deliver an operational system by 2020.
Poster MOPHA080 [1.586 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA080
Paper received: 26 September 2019 · Paper accepted: 10 October 2019 · Issue date: 30 August 2020
 
MOPHA085 CERN Controls Open Source Monitoring System 404
 
  • F. Locci, F. Ehm, L. Gallerani, J. Lauener, J.P. Palluel, R. Voirin
    CERN, Meyrin, Switzerland
 
  The CERN accelerator controls infrastructure spans several thousand machines and devices used for accelerator control and data acquisition. In 2009, a completely in-house CERN solution (DIAMON) was developed to monitor and diagnose the complete controls infrastructure. The adoption of the solution by an enlarged community of users and its rapid expansion led to a final product that became more difficult to operate and maintain, in particular because of the multiplicity and redundancy of the services, the centralized management of the data acquisition and visualization software, the complex configuration and the intrinsic scalability limits. At the end of 2017, a completely new monitoring system for the beam controls infrastructure was launched. The new "COSMOS" system was developed with two main objectives in mind: first, to detect instabilities and prevent breakdowns of the control system infrastructure; and second, to provide users with a more coherent and efficient solution for the development of their specific data monitoring agents and related dashboards. This paper describes the overall architecture of COSMOS, focusing on the conceptual and technological choices of the system.
Poster MOPHA085 [1.475 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA085
Paper received: 29 September 2019 · Paper accepted: 19 October 2019 · Issue date: 30 August 2020
 
MOPHA112 Improving Performance of the MTCA System by Use of PCI Express Non-Transparent Bridging and Point-To-Point PCI Express Transactions 480
 
  • L.P. Petrosyan
    DESY, Hamburg, Germany
 
  The PCI Express standard enables some of the highest data transfer rates available today. However, with a large number of modules in an MTCA system, increasing complexity of individual MTCA components, and a growing demand for high data transfer rates to client programs, the performance of the overall system becomes an important key parameter. Multiprocessor systems are known to provide not only higher processing bandwidth but also greater system reliability through host failover mechanisms. The use of non-transparent bridges in PCI systems, supporting intelligent adapters in enterprise systems and multiple processors in embedded systems, is a well-established technology. There, the non-transparent bridge acts as a gateway between the local subsystem and the system backplane. This can be ported to the PCI Express standard by replacing one of the transparent switches of the PCI Express switch with a non-transparent switch. Our experience of establishing non-transparent bridging in MTCA systems will be presented.
Poster MOPHA112 [0.452 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA112
Paper received: 10 September 2019 · Paper accepted: 03 November 2019 · Issue date: 30 August 2020
 
MOPHA154 Data Acquisition System Deployment Using Docker Containers for the SMuRF Project 597
 
  • J.A. Vásquez
    SLAC, Menlo Park, California, USA
 
  The SLAC Microresonator Radio Frequency (SMuRF) system is being developed as a readout system for next-generation Cosmic Microwave Background (CMB) cameras*. It is based on an FPGA board, where the real-time digital processing algorithms are implemented, and high-level applications running on an industrial PC. The software for this project is based on C++ and Python and is in active development. The software follows the client-server model, where the server implements the low-level communication with the FPGA while high-level applications and data processing algorithms run on the client. SMuRF systems are being deployed at several institutions, and in order to facilitate the management of software releases, Docker containers are being used. Docker images, for both servers and clients, contain all the software packages and configurations needed for their use. The images are tested, tagged, and published in one place. They can then be deployed at all other institutions in minutes with no extra dependencies. This paper describes how the Docker images are designed and built, and how continuous integration tools are used in their release cycle for this project.
*arXiv:1809.03689 [astro-ph.IM]
 
Poster MOPHA154 [2.189 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA154
Paper received: 27 September 2019 · Paper accepted: 10 October 2019 · Issue date: 30 August 2020
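
As an illustration of the container-based deployment described in MOPHA154, the sketch below pulls a tagged image and starts it with the Docker SDK for Python. The registry, image name, tag and runtime options are hypothetical placeholders; the actual SMuRF images and their run-time configuration are defined by the project.

    # Minimal sketch: deploy a tested, tagged server image with the Docker SDK for Python.
    # Registry, image name, tag and runtime options are hypothetical placeholders.
    import docker

    client = docker.from_env()

    IMAGE = "registry.example.org/smurf/server"
    TAG = "v4.0.0"

    # Pull the published image so every site runs exactly the same software stack.
    client.images.pull(IMAGE, tag=TAG)

    # Host networking and a read-only configuration volume are typical choices for
    # hardware-facing services; the real options are project specific.
    container = client.containers.run(
        f"{IMAGE}:{TAG}",
        name="smurf-server",
        detach=True,
        network_mode="host",
        volumes={"/data/smurf/config": {"bind": "/config", "mode": "ro"}},
        restart_policy={"Name": "unless-stopped"},
    )
    print(container.short_id, container.status)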
 
MOPHA160 Enabling Data Analytics as a Service for Large Scale Facilities 614
 
  • K. Woods, R.J. Clegg, N.S. Cook, R. Millward
    Tessella, Abingdon, United Kingdom
  • F. Barnsley, C. Jones
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
 
  Funding: UK Research and Innovation - Science & Technology Facilities Council (UK SBS IT18160)
The Ada Lovelace Centre (ALC) at STFC is an integrated, cross-disciplinary, data-intensive science centre for better exploitation of research carried out at large-scale UK facilities, including the Diamond Light Source, the ISIS Neutron and Muon Facility, the Central Laser Facility and the Culham Centre for Fusion Energy. ALC will provide on-demand data analysis, interpretation and analytics services to worldwide users of these research facilities. Using open-source components, ALC and Tessella have together created a software infrastructure to support the delivery of that vision. The infrastructure comprises a Virtual Machine Manager for managing pools of VMs across distributed compute clusters; components for automated provisioning of data analytics environments across heterogeneous clouds; a Data Movement System to efficiently transfer large datasets; and a Kubernetes cluster to manage on-demand submission of Spark jobs. In this paper, we discuss the challenges of creating an infrastructure to meet the differing analytics needs of multiple facilities and report the architecture and design of the infrastructure that enables Data Analytics as a Service.
 
Poster MOPHA160 [1.665 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA160
Paper received: 30 September 2019 · Paper accepted: 10 October 2019 · Issue date: 30 August 2020
 
TUDPP01 A Monitoring System for the New ALICE O2 Farm 835
 
  • G. Vino, D. Elia
    INFN-Bari, Bari, Italy
  • V. Chibante Barroso, A. Wegrzynek
    CERN, Meyrin, Switzerland
 
  The ALICE experiment has been designed to study the physics of strongly interacting matter with heavy-ion collisions at the CERN LHC. A major upgrade of the detector and computing model (O2, Offline-Online) is currently ongoing. The ALICE O2 farm will consist of almost 1000 nodes able to read out and process on the fly about 27 Tb/s of raw data. To increase the efficiency of computing farm operations, a general-purpose near-real-time monitoring system has been developed: it builds on high performance, high availability, modularity, and open-source components. The core component (Apache Kafka) ensures high throughput, data pipelines, and fault-tolerant services. Additional monitoring functionality is based on Telegraf as metric collector, Apache Spark for complex aggregation, InfluxDB as time-series database, and Grafana as visualization tool. A logging service based on the Elasticsearch stack is also included. The designed system handles metrics coming from the operating system, the network, custom hardware, and in-house software. A prototype version is currently running at CERN and has also been successfully deployed at the ReCaS Datacenter of INFN Bari for both monitoring and logging.
Slides TUDPP01 [1.128 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-TUDPP01
Paper received: 30 September 2019 · Paper accepted: 10 October 2019 · Issue date: 30 August 2020
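
To make the metric flow concrete, here is a minimal sketch of a collector publishing one measurement to Kafka, in the spirit of the architecture described in TUDPP01. The broker address, topic name and message layout are assumptions; the O2 farm uses Telegraf and its own schema.

    # Minimal sketch: publish one host metric to a Kafka topic as JSON.
    # Broker, topic and message layout are hypothetical placeholders.
    import json
    import socket
    import time

    from kafka import KafkaProducer  # kafka-python

    producer = KafkaProducer(
        bootstrap_servers=["monitoring-broker.example.org:9092"],
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    metric = {
        "measurement": "cpu_load",
        "host": socket.gethostname(),
        "value": 0.42,                         # would normally come from /proc or psutil
        "timestamp_ns": int(time.time() * 1e9),
    }

    producer.send("o2-metrics", metric)        # hypothetical topic name
    producer.flush()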
 
TUDPP02 Data Acquisition System for the APS Upgrade 841
 
  • S. Veseli, N.D. Arnold, T.G. Berenc, J. Carwardine, G. Decker, T. Fors, T.J. Madden, G. Shen, S.E. Shoaf
    ANL, Lemont, Illinois, USA
 
  Funding: Argonne National Laboratory’s work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under contract DE-AC02-06CH11357
The APS Upgrade multi-bend achromat (MBA) accelerator uses state-of-the-art embedded controllers coupled to various technical subsystems. These controllers have the capability to collect large amounts of fast data for statistics, diagnostics, or fault recording. At times, continuous real-time acquisition of this data is preferred, which presents a number of challenges that must be considered early on in the design, such as the network architecture, data management and storage, real-time processing, and the impact on normal operations. The design goal is selectable acquisition of turn-by-turn BPM data together with additional fast diagnostics data. In this paper we discuss the engineering specifications and the design of the MBA Data Acquisition System (DAQ). This system will interface with several technical subsystems to provide time-correlated and synchronously sampled data acquisition for commissioning, troubleshooting, performance monitoring and fault detection. Since most of these subsystems will be new designs for the MBA, defining the functionality and interfaces to the DAQ early in the development will ensure the necessary components are included in a consistent and systematic way.
 
Slides TUDPP02 [13.915 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-TUDPP02
Paper received: 30 September 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
 
TUDPP03 Improvement of EPICS Software Deployment at NSLS-II 847
 
  • A.A. Derbenev
    BNL, Upton, New York, USA
 
  The NSLS-II control system has workstations and servers standardized on the Debian OS. With exceptions like RTEMS and Windows systems, where software is built and delivered by hand, all hosts have EPICS software installed from an internally hosted and externally mirrored Debian package repository. Configured by Puppet, machines have a similar environment with EPICS base, modules, libraries, and binaries. The repository is populated from epicsdeb, a community organization on GitHub. Currently, packages are available for Debian 8 and 9, with legacy support provided for Debian 6 and 7. Since packaging creates overhead on how quickly software updates can be made available, keeping production systems on track with development is a challenging task. Software is often customized and built manually to get recent features, e.g. for AreaDetector. Another challenge is services like GPFS, which underperform or do not work on Debian. The proposed improvements target keeping the production environment up to date. A detachment from the host OS is achieved by using containers, such as Docker, to provide software images. A CI/CD pipeline is created to build and distribute software updates.
Slides TUDPP03 [0.710 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-TUDPP03
Paper received: 29 September 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
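
A minimal sketch of the container-based distribution step mentioned in TUDPP03: build an image from a Dockerfile and push it to a registry with the Docker SDK for Python, as a CI/CD job might. The build context, image name and registry are hypothetical; the actual NSLS-II pipeline and image contents are site specific.

    # Minimal sketch of a CI step: build an EPICS software image and push it to a registry.
    # Build context, image name and registry are hypothetical placeholders.
    import docker

    client = docker.from_env()

    IMAGE = "registry.example.org/nsls2/epics-softioc:7.0"

    # Build the image from a Dockerfile kept next to the module sources.
    image, build_log = client.images.build(path="./softioc", tag=IMAGE, rm=True)
    for entry in build_log:
        if "stream" in entry:
            print(entry["stream"], end="")

    # Push the freshly built image so production hosts can pull the update.
    for line in client.images.push(IMAGE, stream=True, decode=True):
        print(line)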
 
TUDPP04 Data Acquisition and Virtualisation of the CLARA Controls System 852
 
  • R.F. Clarke, G. Cox, M.D. Hancock, P.W. Heath, S. Kinder, N. Knowles, B.G. Martlew, A. Oates, P.H. Owens, W. Smith, J.T.G. Wilson
    STFC/DL, Daresbury, Warrington, Cheshire, United Kingdom
  • S. Kinder
    DSoFt Solutions Ltd, Warrington, United Kingdom
 
  The CLARA experiment at the STFC Daresbury Laboratory has just completed its first successful exploitation period. The CLARA control system is being rapidly deployed as CLARA enters its next development phase, and our current infrastructure is becoming hard to maintain. Virtualization of the server infrastructure will allow rapid deployment, recovery and testing of systems infrastructure. This talk will review our experience of migrating several key services and IOCs to a virtualized environment. KVM and LXD have been evaluated against our current system, and Ansible has been used to automate many tasks that were normally done by hand. The Archiver Appliance is being exploited beyond its original deployment and is a critical component of several analysis tool-chains. Virtualization allows development, maintenance and deployment of the archiver without disrupting its users. Virtualization is also used to manage the CLARA Virtual Accelerator. The Virtual Accelerator can now run as many instances, which is proving useful for scientists; originally, it was limited to one instance per server.
Slides TUDPP04 [0.945 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-TUDPP04
Paper received: 30 September 2019 · Paper accepted: 10 October 2019 · Issue date: 30 August 2020
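
As a sketch of the Ansible-driven automation described in TUDPP04, the snippet below launches a playbook from Python with the ansible-runner library. The playbook name, inventory, directory layout and variables are hypothetical.

    # Minimal sketch: run an Ansible playbook that (re)deploys a virtualized IOC host.
    # Playbook name, inventory, directory layout and variables are hypothetical.
    import ansible_runner

    result = ansible_runner.run(
        private_data_dir="/opt/clara-automation",    # contains project/, inventory/, env/
        playbook="deploy_ioc_vm.yml",
        inventory="inventory/virtual_hosts",
        extravars={"ioc_name": "BL01-MAG-IOC-01"},
    )

    print("status:", result.status)   # e.g. 'successful' or 'failed'
    print("return code:", result.rc)
    for event in result.events:
        if event.get("event") == "runner_on_failed":
            print("failed task:", event["event_data"].get("task"))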
 
WEAPP01 Old and New Generation Control Systems at ESA 859
 
  • M. Pecchioli
    ESA/ESOC, Darmstadt, Germany
 
  Traditionally, Mission Control Systems for spacecraft operated at the European Space Operations Centre (ESOC) have been developed based on large re-use of a common implementation covering the majority of the required functions, which is referred to as the mission control system infrastructure. The generation currently in operations has been successfully used for all categories of missions, including many commercial ones operated outside ESOC. It is however anticipated that its implementation will face obsolescence in the coming years, and thus an ambitious project is currently ongoing, aiming at the development and deployment of a completely new generation. This project capitalizes as much as possible on the European initiative (referred to as EGS-CC), which is progressively developing and delivering a modern and advanced platform forming the basis for any type of monitoring and control application for space systems. This paper provides a technical overview of the two infrastructure generations, highlighting the main differences from technical and usability standpoints. Lessons learned from previous and current developments are also analyzed.
Slides WEAPP01 [4.794 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEAPP01
Paper received: 26 September 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
 
WEAPP02 Modernization Challenges for the IT Infrastructure at the National Ignition Facility 866
 
  • A.D. Casey, P. Adams, M.J. Christensen, E.P. Ghere, N.I. Spafford, M.R.V. Srirangapatanam, K.L. Tribbey, R. Vadlamani, K.S. White, D.P. Yee
    LLNL, Livermore, California, USA
 
  Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
As the National Ignition Facility (NIF) enters its second decade of full-scale operations, the demands on all aspects of the Information Technology (IT) infrastructure are becoming more varied, complex, and critical. Cybersecurity is an increasing focus area for the NIF IT team, with the goal of securing the data center whilst preserving the flexibility for developers to continue to access the sensitive areas of the control system and the production tools. This must be done whilst supporting the interoperability of control system elements executing on legacy bare-metal hardware in an increasingly homogenized virtual environment, in addition to responding to users' requests for ever-increasing storage and the introduction of cloud services. While addressing these evolutionary changes, the impact on continuous 24/7 shot operations must also be minimized. The challenges, strategies and implementation approaches being undertaken by the NIF IT team to address infrastructure modernization will be presented.
 
Slides WEAPP02 [7.028 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEAPP02
Paper received: 02 October 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
 
WEAPP03 Converting From NIS to Redhat Identity Management 871
 
  • T.S. McGuckin, R.J. Slominski
    JLab, Newport News, Virginia, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177.
The Jefferson Lab (JLab) accelerator controls network has transitioned to a new authentication and directory service infrastructure. The new system uses Red Hat Identity Manager (IdM) as a single integrated front-end to the Lightweight Directory Access Protocol (LDAP) and as a replacement for NIS and a stand-alone Kerberos authentication service. This system allows for integration of authentication across Unix and Windows environments and across different JLab computing environments, including across firewalled networks. The decision-making process, conversion steps, issues and solutions will be discussed.
 
Slides WEAPP03 [3.898 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEAPP03
Paper received: 01 October 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
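
One small step in such a migration can be sketched as follows: cross-checking that every account in the legacy NIS passwd map exists in IdM. This is not taken from the JLab procedure; it assumes the ypcat and ipa command-line tools are available and that a Kerberos ticket has already been obtained.

    # Minimal sketch: verify that every NIS passwd entry has a matching IdM user.
    # Assumes the 'ypcat' (NIS) and 'ipa' (IdM/FreeIPA) CLIs and a valid Kerberos ticket.
    import subprocess

    def nis_logins():
        """Return the login names published in the NIS passwd map."""
        out = subprocess.check_output(["ypcat", "passwd"], text=True)
        return [line.split(":", 1)[0] for line in out.splitlines() if line]

    def in_idm(login):
        """True if the user exists in IdM ('ipa user-show' exits non-zero otherwise)."""
        result = subprocess.run(["ipa", "user-show", login],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    missing = [login for login in nis_logins() if not in_idm(login)]
    print(len(missing), "NIS accounts not yet in IdM:", ", ".join(missing))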
 
WEAPP04 ICS Infrastructure Deployment Overview at ESS 875
 
  • B. Bertrand, S. Armanet, J. Christensson, A. Curri, A. Harrisson, R. Mudingay
    ESS, Lund, Sweden
 
  The ICS Control Infrastructure group at the European Spallation Source (ESS) is responsible for deploying many different services. We treat infrastructure as code in order to deploy everything in a repeatable, reproducible and reliable way. We use three main tools to achieve that: Ansible (an IT automation tool), AWX (a GUI for Ansible) and CSEntry (a custom in-house web application used as a Configuration Management Database). CSEntry (Control System Entry) is used to register any device with an IP address (network switches, physical machines, virtual machines), which allows us to use it as a dynamic inventory for Ansible. DHCP and DNS are automatically updated as soon as a new host is registered in CSEntry. This is done by triggering a task that calls an Ansible playbook via the AWX API. Virtual machines can be created directly from CSEntry with one click, again by calling another Ansible playbook via the AWX API. This playbook uses the API of Proxmox (our virtualization platform) for the VM creation. By using Ansible groups, different Proxmox clusters can be managed from the same CSEntry web application. These tools give us an easy and flexible solution to deploy software in a reproducible way.
Slides WEAPP04 [13.604 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEAPP04
Paper received: 30 September 2019 · Paper accepted: 10 October 2019 · Issue date: 30 August 2020
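
The CSEntry-to-AWX coupling boils down to a single REST call that launches a job template, sketched below with the requests library. The AWX host, token, template ID and extra variables are hypothetical placeholders; the real integration lives inside CSEntry.

    # Minimal sketch: launch an AWX job template (an Ansible playbook run) via the AWX REST API.
    # Host, token, job-template ID and variables are hypothetical placeholders.
    import requests

    AWX_URL = "https://awx.example.org"
    TOKEN = "replace-with-an-awx-oauth2-token"
    TEMPLATE_ID = 42                              # e.g. a "create VM" job template

    response = requests.post(
        f"{AWX_URL}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"extra_vars": {"vm_name": "ioc-lab-01", "vm_memory_mb": 4096}},
        timeout=30,
    )
    response.raise_for_status()
    print("launched AWX job", response.json().get("id"))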
 
WEMPL002 Project Nheengatu: EPICS support for CompactRIO FPGA and LabVIEW-RT 997
Alternate paper code: WEPHA005
 
  • D. Alnajjar, G.S. Fedel, J.R. Piton
    LNLS, Campinas, Brazil
 
  A novel solution for integrating EPICS with CompactRIO (cRIO), the real-time embedded industrial controllers by National Instruments (NI), is proposed under the name Nheengatu (NHE). The cRIO controller, which is equipped with a processor running a real-time version of Linux (LinuxRT) and a Xilinx Kintex FPGA, is extremely powerful for control systems, since it can be used to program complex real-time data processing and fine control tasks on both LinuxRT and the FPGA. The proposed solution enables the control and monitoring of all tasks running on LinuxRT and the FPGA through EPICS. The devised solution is not limited to any particular type of cRIO module. Its architecture can be abstracted into four groups: the FPGA and LabVIEW-RT interface blocks, the Nheengatu library, the device support and the IOC. The Nheengatu library, device support and IOC are generic: they are compiled only once and can be deployed on all available cRIOs. Consequently, a setup-specific configuration file is provided to the IOC upon instantiation. The configuration file contains all the data needed for the devised architecture to configure the FPGA and to enable communication between EPICS and the FPGA/LabVIEW-RT interface blocks.
Poster WEMPL002 [0.565 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEMPL002
Paper received: 14 September 2019 · Paper accepted: 02 October 2020 · Issue date: 30 August 2020
 
WEPHA019 MONARC: Supervising the Archiving Infrastructure of CERN Control Systems 1111
 
  • J-C. Tournier, E. Blanco Viñuela
    CERN, Geneva, Switzerland
 
  The CERN industrial control systems, using WinCC OA as SCADA (Supervisory Control and Data Acquisition), share a common history data archiving system relying on an Oracle infrastructure. It consists of two clusters of two nodes, for a total of more than 250 schemas. Due to the large number of schemas and the shared nature of the infrastructure, three basic needs arose: (1) monitor, i.e. get the inventory of all DB nodes and schemas along with their configurations, such as the type of partitioning and the retention period; (2) control, i.e. parameterise each schema individually; and (3) supervise, i.e. have an overview of the health of the infrastructure and be notified of misbehaving schemas or database nodes. In this publication, we present a way to monitor, control and supervise the data archiving system based on a classical SCADA system. The paper is organized in three parts: the first part presents the main functionalities of the application, while the second part digs into its architecture and implementation. The third part presents a set of use cases demonstrating the benefits of using the application.
Poster WEPHA019 [2.556 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA019
Paper received: 30 September 2019 · Paper accepted: 19 October 2019 · Issue date: 30 August 2020
 
WEPHA056 Tango Controls Benchmarking Suite 1224
 
  • M. Liszcz, P.P. Goryl
    S2Innovation, Kraków, Poland
 
  Funding: Tango Community
Tango Controls is a client-server framework used to build distributed control systems. It is applied at small installations with a few clients and servers, as well as at large laboratories running hundreds of servers talking to thousands of devices, with hundreds of concurrent client applications. A Tango Controls benchmarking suite has been developed. It allows several features of Tango Controls to be tested for efficiency. The tool can be used to check the impact of new developments in the framework, as well as the impact of the specific network-server and deployment architecture implemented at a facility. The tool is presented along with some benchmark results.
 
Poster WEPHA056 [1.497 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA056
Paper received: 30 September 2019 · Paper accepted: 20 October 2019 · Issue date: 30 August 2020
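
A minimal sketch of the kind of measurement such a suite performs: timing synchronous attribute reads against a Tango device with PyTango. It uses the standard TangoTest device; the actual suite covers many more scenarios and load patterns.

    # Minimal sketch: measure the latency of synchronous attribute reads with PyTango.
    # Uses the standard TangoTest device; the real suite covers many more scenarios.
    import statistics
    import time

    import tango

    proxy = tango.DeviceProxy("sys/tg_test/1")   # default TangoTest instance
    N = 1000

    latencies = []
    for _ in range(N):
        start = time.perf_counter()
        proxy.read_attribute("double_scalar")
        latencies.append(time.perf_counter() - start)

    print("reads          :", N)
    print("mean latency   : %.3f ms" % (statistics.mean(latencies) * 1e3))
    print("95th percentile: %.3f ms" % (sorted(latencies)[int(0.95 * N)] * 1e3))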
 
WEPHA057 Building a Data Analysis as a Service Portal 1228
 
  • A. Götz, A. Campbell
    ESRF, Grenoble, France
  • I. Andrian, G. Kourousias
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
  • A. Camps, D. Salvat, D. Sanchez
    ALBA-CELLS Synchrotron, Cerdanyola del Vallès, Spain
  • M. van Daalen
    PSI, Villigen PSI, Switzerland
 
  Funding: This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 730872
As more and more scientific data are stored at photon sources, there is a growing need to provide services to view, reduce and analyze the data remotely. The Calipsoplus* project, in which all photon sources in Europe are involved, has recognized this need and created a prototype portal for Data Analysis as a Service. This paper presents the technology choices, the architecture of the blueprint, the prototype services and the objectives of the production version planned in the medium term. The paper covers the challenges of building a portal from scratch which meets the needs of multiple sites, each with their own data catalogue, local computing infrastructure and different workflows. User authentication and management are essential to creating a useful but sustainable service.
*http://www.calipsoplus.eu/
 
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA057
Paper received: 01 October 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
 
WEPHA104 Managing Cybersecurity for Control System and Safety System Development Environments 1343
 
  • R. Mudingay, S. Armanet
    ESS, Lund, Sweden
 
  At ESS, we manage cybersecurity for our control system infrastructure by combining technologies that are relevant for each system. User access to the control system networks is controlled by an internal DMZ concept, whereby we use standard security tools (vulnerability scanners, central logging, firewall policies, system and network monitoring) and users have to go through dedicated control points (reverse proxy, jump hosts, privileged access management solutions or EPICS Channel Access / PV Access gateways). The infrastructure is managed through a DevOps approach: describing each component using a configuration management solution; using version control to track changes, with continuous integration workflows in our development process; and constructing the deployment of the lab/staging area to mimic the production environment. We also believe in the flexibility of virtualization. This is particularly true for safety systems, where the development of safety-critical code requires a high level of isolation. To this end, we utilize dedicated virtualized infrastructure and isolated development environments to improve control (remote access, software updates, safety code management).
Poster WEPHA104 [0.840 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA104
Paper received: 27 September 2019 · Paper accepted: 03 November 2019 · Issue date: 30 August 2020
 
WEPHA112 Database Scheme for On-Demand Beam Route Switching Operations at SACLA/SPring-8 1352
 
  • K. Okada, N. Hosoda, T. Ohshima, T. Sugimoto, M. Yamaga
    JASRI, Hyogo, Japan
  • T. Fujiwara, T. Maruyama, T. Ohshima, T. Okada
    RIKEN SPring-8 Center, Hyogo, Japan
  • T. Fukui, N. Hosoda, H. Maesaka
    RIKEN SPring-8 Center, Innovative Light Sources Division, Hyogo, Japan
  • O. Morimoto, Y. Tajiri
    SES, Hyogo-pref., Japan
 
  At SACLA, the X-ray free-electron laser (XFEL) facility, we have been operating the electron linac in a time-sharing (equal-duty) mode between beamlines. The next step is to vary the duty factor on an on-demand basis and to bring the beam into the SPring-8 storage ring. This is part of the larger picture of an upgrade*. The low-emittance beam is ideal for the next-generation storage ring. In every cycle of the 60 Hz repetition, we have to handle each bunch of electrons properly. The challenge is that we must keep the beam quality demanded by the XFEL while responding to occasional injection requests from the storage ring**. This paper describes the database system that supports both SACLA and SPring-8 operations. The system is a combination of RDB and NoSQL databases. In the on-demand beam-switching operation, the RDB part keeps the parameters that define sequences, which include a set of one-second route patterns, a bucket sequence for the injection, etc. As for data analysis, building an event for a certain route is a post-processing step, because not all equipment receives the route command in real time. We present the preparation status toward standard operation for beamline users.
*http://rsc.riken.jp/pdf/SPring-8-II.pdf
**IPAC2019 proceedings
 
Poster WEPHA112 [0.561 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA112
Paper received: 01 October 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
 
WEPHA133 Sirius Diagnostics IOC Deployment Strategy 1407
 
  • L.M. Russo
    LNLS, Campinas, Brazil
 
  The Sirius beam diagnostics group is responsible for specifying, designing and developing IOCs for most of the diagnostics in the Booster, Storage Ring and Transport Lines, such as Screens, Slits, Scrapers, Beam Position Monitors, Tune Measurement, Beam Profile, Current Measurement, Injection Efficiency and Bunch-by-Bunch Feedback. In order to ease maintenance and to improve robustness, repeatability and dependency isolation, a set of guidelines and recipes was developed to standardize IOC deployment. It is based on two main components: containerization, which isolates the IOC in a well-known environment, and a remote-boot strategy for our diagnostics servers, which ensures that all hosts boot into the same base operating system image. In this paper, the remote-boot strategy and its constituent parts, as well as the containerization guidelines, are discussed.
Poster WEPHA133 [1.213 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA133
Paper received: 29 September 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
 
WEPHA134 Monitoring System for IT Infrastructure and EPICS Control System at SuperKEKB 1413
 
  • S. Sasaki, T.T. Nakamura
    KEK, Ibaraki, Japan
  • M. Hirose
    KIS, Ibaraki, Japan
 
  A monitoring system has been deployed to efficiently monitor the IT infrastructure and the EPICS control system at SuperKEKB. The system monitors two types of data: metrics and logs. Metrics such as network traffic and CPU usage are monitored with Zabbix. In addition, we developed an EPICS Channel Access client application that sends PV values to the Zabbix server, and the status of each IOC is monitored with it. The data archived in Zabbix are visualized with Grafana, which allows us to easily create dashboards and analyze the data. Logs such as text data are monitored with the Elastic Stack, which lets us collect, search, analyze and visualize logs. We apply it to monitor broadcast packets in the control network and the frequency of Channel Access searches for each PV. Moreover, a Grafana plugin has been developed to visualize data from pvAccess RPC servers, and various data such as CSS alarm status can be displayed on it.
Poster WEPHA134 [0.732 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA134
Paper received: 30 September 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
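
A minimal sketch of the Channel Access to Zabbix bridge described above, assuming the zabbix_sender utility is installed and matching trapper items exist on the Zabbix server. PV names, the Zabbix host name and item keys are hypothetical placeholders; the actual SuperKEKB application is a dedicated CA client.

    # Minimal sketch: forward EPICS PV updates to Zabbix trapper items via zabbix_sender.
    # PV names, Zabbix host/keys and server address are hypothetical placeholders.
    import subprocess

    import epics  # pyepics

    ZABBIX_SERVER = "zabbix.example.org"
    ZABBIX_HOST = "ca-gateway-01"                 # host name as registered in Zabbix
    PV_TO_KEY = {
        "TEST:IOC01:HEARTBEAT": "epics.ioc01.heartbeat",
        "TEST:IOC01:LOAD":      "epics.ioc01.load",
    }

    def forward(pvname=None, value=None, **kwargs):
        """Channel Access monitor callback: push every update to Zabbix."""
        subprocess.run(
            ["zabbix_sender", "-z", ZABBIX_SERVER, "-s", ZABBIX_HOST,
             "-k", PV_TO_KEY[pvname], "-o", str(value)],
            check=False)

    for pv in PV_TO_KEY:
        epics.camonitor(pv, callback=forward)

    input("monitoring; press Enter to stop\n")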
 
WEPHA151 A Very Lightweight Process Variable Server 1449
 
  • A. Sukhanov, J.P. Jamilkowski
    BNL, Upton, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Modern instruments are often supplied with rich proprietary software tools, which makes it difficult to integrate them into an existing control system. The liteServer is a very lightweight, low-latency, cross-platform network protocol for signal monitoring and control. It provides the basic functionality of popular channel-access protocols such as EPICS CA or pvAccess. It supports request-reply patterns ('info', 'get' and 'set' requests) and a publish-subscribe pattern ('monitor' request). The main scope of the liteServer is to: 1) provide control and monitoring for instruments supplied with proprietary software, 2) provide the fastest possible Ethernet transactions, and 3) make it possible to implement the protocol in an FPGA without a CPU core. The transport protocol is connectionless (UDP) and the data serialization format is Universal Binary JSON (UBJSON). UBJSON is fully compatible with the JSON specification and is very efficient and fast. A liteServer-based system can be connected to an existing control system using a simple bridge program (bridges for EPICS and RHIC ADO are provided).
 
Poster WEPHA151 [0.383 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA151
Paper received: 30 September 2019 · Paper accepted: 10 October 2019 · Issue date: 30 August 2020
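
To illustrate the transport choices (connectionless UDP plus UBJSON serialization), here is a minimal client sketch using the py-ubjson package. The request layout, port and device/parameter names are purely hypothetical; the abstract does not specify the actual liteServer wire format.

    # Minimal sketch of a UDP request/reply exchange serialized with UBJSON.
    # Only the transport choices (UDP + UBJSON) come from the abstract; the message
    # layout, port and names below are hypothetical.
    import socket

    import ubjson  # py-ubjson

    SERVER = ("liteserver.example.org", 9700)

    request = {"cmd": "get", "device": "dev1", "pars": ["temperature"]}

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)                 # UDP is connectionless: time out rather than hang
    sock.sendto(ubjson.dumpb(request), SERVER)

    data, _addr = sock.recvfrom(65535)
    print(ubjson.loadb(data))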
 
WESH2001 CS-Studio Alarm System Based on Kafka 1504
Alternate paper code: WEPHA077
 
  • K.-U. Kasemir
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under contract number DE-AC05-00OR22725.
The CS-Studio alarm system was originally based on a relational database and the Apache ActiveMQ message service. The former was necessary to store configuration and state, while the latter communicated state updates and user actions. In a recent update, the combination of relational database and ActiveMQ has been replaced by Apache Kafka. We present how this simplified the implementation while at the same time improving performance.
 
Poster WESH2001 [1.938 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WESH2001
Paper received: 26 September 2019 · Paper accepted: 09 October 2019 · Issue date: 30 August 2020
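
A minimal sketch of a client following alarm traffic on Kafka, in the spirit of this paper. The broker address, topic name and message layout are assumptions, not the CS-Studio alarm message schema.

    # Minimal sketch: follow alarm state updates published on a Kafka topic.
    # Broker, topic and payload keys are hypothetical placeholders.
    import json

    from kafka import KafkaConsumer  # kafka-python

    consumer = KafkaConsumer(
        "Accelerator",                                   # hypothetical alarm topic
        bootstrap_servers=["alarm-broker.example.org:9092"],
        value_deserializer=lambda m: json.loads(m.decode("utf-8")) if m else None,
    )

    for message in consumer:
        state = message.value or {}
        # 'severity' and 'message' are assumed payload keys for this sketch.
        print(message.key, state.get("severity"), state.get("message"))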
 
WESH2002 EPICS pva Access Control at ESS 1509
Alternate paper code: WEPHA160
 
  • G. Weiss
    ESS, Lund, Sweden
 
  At the European Spallation Source, PV Access has been selected as the default EPICS protocol. However, PV Access in the initial releases of EPICS 7 does not implement any access control of client requests. In order to be able to protect selected process variables (PVs) from write requests that may cause harm to the system, some type of access control is needed. This paper details how PV Access is extended to partially reuse the access control available in Channel Access, while at the same time providing additional features. It also explains how ESS intends to deploy and manage access control in terms of infrastructure, tools and responsibilities. Limitations of the access control mechanism are also discussed.  
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2019-WESH2002
Paper received: 01 October 2019 · Paper accepted: 23 October 2019 · Issue date: 30 August 2020
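
For context, a minimal PV Access client write with the p4p Python bindings is sketched below; with server-side access control in place, a write like this from an unauthorized host or user would be rejected. The PV name is a hypothetical placeholder.

    # Minimal sketch: a PV Access client write that server-side access control
    # may accept or reject depending on the client host/user. PV name is hypothetical.
    from p4p.client.thread import Context

    ctxt = Context("pva")                 # PV Access client context

    pv = "LAB:PS1:CurrentSetpoint"
    print("before:", ctxt.get(pv))

    try:
        ctxt.put(pv, 1.5)                 # refused if the rules forbid writes from here
        print("after :", ctxt.get(pv))
    except Exception as exc:
        print("write rejected:", exc)

    ctxt.close()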