Paper | Title | Page |
---|---|---|
MOMPR006 | Performance of the ALICE Luminosity Leveling Software Architecture in the Pb-Pb Physics Run | 167 |
MOPHA150 | use link to see paper's listing under its alternate paper code | |
Luminosity leveling is performed in the ALICE experiment of the Large Hadron Collider (LHC) in order to limit the event pile-up probability and ensure safe operation of the detectors. It will be even more important during Run 3, when Pb ion-Pb ion (Pb-Pb) collisions will be delivered at 50 kHz in IP2. On the ALICE side, it is handled by the ALICE-LHC Interface project, which also ensures an online data exchange between ALICE and the LHC. An automated luminosity leveling algorithm was developed for the proton-proton physics run and was also deployed for the Pb-Pb run, with some minor changes following the experience gained. The algorithm is implemented in the SIMATIC WinCC SCADA environment, and determines the leveling step from measured beam parameters received from the LHC and the luminosity recorded by ALICE. In this paper, the software architecture of the luminosity leveling software is presented, and the performance achieved during the Pb-Pb run and Van der Meer scans is discussed.
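The leveling algorithm itself lives in the WinCC SCADA environment and is not reproduced here; purely as an illustration of the idea, below is a minimal sketch in Python of a proportional leveling step with a deadband, with all names, units and gains hypothetical.

```python
# Hypothetical sketch of a luminosity leveling step (not the ALICE code).
# Leveling keeps the measured luminosity near a target by adjusting the
# transverse beam separation at the interaction point.

def leveling_step(measured_lumi, target_lumi, separation, gain=0.1, deadband=0.02):
    """Return an updated beam separation (units hypothetical).

    If the measured luminosity deviates from the target by more than the
    deadband, request a step: beams closer together to raise luminosity,
    further apart to lower it.
    """
    rel_error = (measured_lumi - target_lumi) / target_lumi
    if abs(rel_error) < deadband:
        return separation  # within tolerance, no step requested
    # Luminosity falls as separation grows, so step along the error sign.
    return max(0.0, separation + gain * rel_error)
```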
Poster MOMPR006 [3.292 MB] | |
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOMPR006 | |
About • | paper received ※ 30 September 2019 paper accepted ※ 10 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
MOMPR007 | Scalable High Demand Analytics Environments with Heterogeneous Clouds | 171 |
MOPHA161 | use link to see paper's listing under its alternate paper code | |
Funding: UK Research and Innovation - Science & Technology Facilities Council (UK SBS IT18160)
The Ada Lovelace Centre (ALC) at STFC provides on-demand data analysis, interpretation and analytics services to scientists using UK research facilities. ALC and Tessella have built software systems to scale analysis environments to handle peaks and troughs in demand, as well as to reduce latency by provisioning environments closer to scientists around the world. The systems can automatically provision infrastructure and supporting systems within compute resources around the world and in different cloud types (including commercial providers). The system then uses analytics to dynamically provision and configure virtual machines in various locations ahead of demand, so that users experience as little delay as possible. In this poster, we report on the architecture and complex software engineering used to automatically scale analysis environments to heterogeneous clouds and to make them secure and easy to use. We then discuss how analytics was used to create intelligent systems that allow a relatively small team to focus on innovation rather than operations.
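As a rough illustration of provisioning ahead of demand (not the ALC/Tessella implementation), the sketch below computes how many virtual machines to add in a region from a hypothetical session forecast.

```python
# Illustrative sketch only: the system described above is far richer.
# Decide how many analysis-environment VMs to pre-provision in a region,
# given a demand forecast, so users see as little start-up delay as possible.

def vms_to_provision(forecast_sessions, running_vms, sessions_per_vm=10, headroom=0.2):
    """Number of extra VMs needed to cover forecast demand plus headroom."""
    needed = -(-int(forecast_sessions * (1 + headroom)) // sessions_per_vm)  # ceil
    return max(0, needed - running_vms)

# e.g. 180 forecast sessions, 12 VMs running -> provision 10 more
print(vms_to_provision(180, 12))
```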
Poster MOMPR007 [1.650 MB] | |
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOMPR007 | |
About • | paper received ※ 30 September 2019 paper accepted ※ 09 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
MOPHA043 | Accelerator Control Data Mining with WEKA | 293 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Accelerator control systems generate and store large amounts of time-series data related to the performance of an accelerator and its support systems. Many of these time series exhibit detectable trends and patterns. Detecting and recognizing these trends and patterns in a timely manner, and analyzing and predicting future data changes, can provide intelligent ways to improve the controls system with proactive feedback/feed-forward actions. With the help of advanced data mining and machine learning technology, these types of analyses become easier to produce. As machine learning technology matures, with the inclusion of powerful model algorithms, data processing tools, and visualization libraries in different programming languages (e.g. Python, R, Java), it becomes relatively easy for developers to learn and apply machine learning technology to online accelerator control system data. This paper explores time-series data analysis and forecasting in the Relativistic Heavy Ion Collider (RHIC) control systems with the Waikato Environment for Knowledge Analysis (WEKA) system and its Java data mining APIs.
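Time-series forecasters of this kind typically recast a series as supervised learning over lagged inputs; the sketch below shows that lag-feature encoding, written in Python with scikit-learn for brevity rather than the WEKA Java API used in the paper.

```python
# Sketch of the lag-feature encoding behind time-series forecasting,
# shown with scikit-learn rather than WEKA's Java APIs for brevity.
import numpy as np
from sklearn.linear_model import LinearRegression

def make_lagged(series, n_lags=3):
    """Turn a 1-D series into (X, y) where X holds the n_lags previous values."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

series = np.sin(np.linspace(0, 20, 200))   # stand-in for an archived signal
X, y = make_lagged(series)
model = LinearRegression().fit(X, y)
next_value = model.predict(series[-3:].reshape(1, -1))  # one-step forecast
```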
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA043 | |
About • | paper received ※ 20 September 2019 paper accepted ※ 08 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
WEMPL001 | An Application of Machine Learning for the Analysis of Temperature Rise on the Production Target in Hadron Experimental Facility at J-PARC | 992 |
WEPHA003 | use link to see paper's listing under its alternate paper code | |
The Hadron Experimental Facility (HEF) is designed to handle an intense slow-extraction proton beam from the 30 GeV Main Ring (MR) of the Japan Proton Accelerator Research Complex (J-PARC). Proton beams of 5×10¹³ protons per spill, extracted over 2 seconds of the 5.2-second accelerator operating cycle, were delivered from the MR to HEF in the 2018 run. In order to evaluate the soundness of the target, we have analyzed the variation of the temperature rise on the production target, which depends on the beam conditions on the target. The predicted temperature rise is calculated from the existing data of the beam intensity, the spill length (duration of the beam extraction) and the beam position on the target, using linear regression analysis with the machine learning library Scikit-learn. As a result, the predicted temperature rise on the production target shows good agreement with the measured one. We have also examined whether the present method of predicting the temperature rise from existing data can be applied to unknown data in future runs. The present paper reports the status of the measurement system of temperature rise on the target with machine learning in detail.
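A minimal sketch of the regression described above, assuming per-spill arrays of beam intensity, spill length and beam position with the measured temperature rise as target; all values below are made up for illustration.

```python
# Minimal sketch of the linear regression described in the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression

# X: one row per spill -> [intensity, spill_length_s, beam_position_mm]
X = np.array([[4.8e13, 2.0, 0.1],
              [5.0e13, 2.0, 0.3],
              [5.1e13, 1.9, -0.2]])
y = np.array([41.5, 43.2, 45.0])   # measured temperature rise (made-up values)

model = LinearRegression().fit(X, y)
predicted_rise = model.predict(X)   # compare against the measured rise
print(model.coef_, model.intercept_)
```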
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEMPL001 | |
About • | paper received ※ 28 September 2019 paper accepted ※ 10 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
WEMPR010 | Anomaly Detection for CERN Beam Transfer Installations Using Machine Learning | 1066 |
WEPHA155 | use link to see paper's listing under its alternate paper code | |
Reliability, availability and maintainability determine whether or not a large-scale accelerator system can be operated in a sustainable, cost-effective manner. Beam transfer equipment (e.g. kicker magnets) has a potentially significant impact on the global performance of a machine complex. Identifying root causes of malfunctions is currently tedious, and will become infeasible in future systems due to increasing complexity. Machine learning could automate this process. For this purpose, a collaboration between CERN and KU Leuven was established. We present an anomaly detection pipeline which includes preprocessing, detection, postprocessing and evaluation. Merging data from different, asynchronous sources is one of the main challenges. Currently, Gaussian Mixture Models and Isolation Forests are used as unsupervised detectors. For validation, we compare against manual e-logbook entries, which constitute a noisy ground truth. A grid search allows for hyper-parameter optimization across the entire pipeline. Lastly, we incorporate expert knowledge by means of semi-supervised clustering with COBRAS.
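As a sketch of the two unsupervised detectors named above, applied to a placeholder feature matrix standing in for preprocessed, merged signal windows; thresholds and shapes are hypothetical.

```python
# Sketch of the two unsupervised detectors named in the abstract, on a
# placeholder feature matrix X (rows = time windows of merged signals).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))            # stand-in for preprocessed features

gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
gmm_scores = -gmm.score_samples(X)        # low likelihood -> high anomaly score

iso = IsolationForest(random_state=0).fit(X)
iso_scores = -iso.score_samples(X)        # higher -> more anomalous

# Flag, e.g., the 1% most anomalous windows for comparison with e-logbook entries.
threshold = np.quantile(iso_scores, 0.99)
anomalies = np.where(iso_scores > threshold)[0]
```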
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEMPR010 | |
About • | paper received ※ 30 September 2019 paper accepted ※ 09 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
WEPHA025 | Initial Implementation of a Machine Learning System for SRF Cavity Fault Classification at CEBAF | 1131 |
Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177
The Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Laboratory is a high-power Continuous Wave (CW) electron accelerator. It uses a mixture of SRF cryomodules: older, lower-energy C20/C50 modules and newer, higher-energy C100 modules. The cryomodules are arrayed in two anti-parallel linear accelerators. Accurately classifying the type of cavity faults is essential to maintaining and improving accelerator performance. Each C100 cryomodule contains eight 7-cell cavities. When a cavity fault occurs within a cryomodule, all eight cavities generate 17 waveforms each, every waveform containing 8192 points. This data is exported from the control system and saved for review. Analysis of these waveforms is time-intensive and requires a subject matter expert (SME). SMEs examine the data from each event and label it according to one of several known cavity fault types. Multiple machine learning models have been developed on this labeled dataset with sufficient performance to warrant the creation of a limited machine learning software system for use by accelerator operations staff. This paper discusses the transition from model development to implementation of a prototype system.
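A hypothetical sketch of the classification step, reducing each 8-cavity × 17-waveform × 8192-point fault event to summary features for a supervised classifier; the paper's actual models and features may differ.

```python
# Hypothetical sketch: summary features per waveform, then a supervised
# classifier trained on SME-labelled fault events.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def event_features(event):
    """event: array of shape (8, 17, 8192) -> per-waveform summary stats."""
    return np.concatenate([event.mean(axis=2).ravel(),
                           event.std(axis=2).ravel()])   # 2 * 8 * 17 = 272 features

rng = np.random.default_rng(0)
events = rng.normal(size=(10, 8, 17, 8192))      # placeholder labelled events
labels = rng.integers(0, 4, size=10)             # placeholder fault types

X = np.array([event_features(e) for e in events])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
fault_type = clf.predict(X[:1])                   # classify a new event
```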
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA025 | |
About • | paper received ※ 30 September 2019 paper accepted ※ 09 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
WEPHA121 | Deep Neural Network for Anomaly Detection in Accelerators | 1375 |
The main goal of NSRC SOLARIS is to provide the scientific community with high-quality synchrotron light. To do this, it is essential to monitor the subsystems responsible for beam stability. In this paper, a deep neural network for anomaly detection in time-series data is proposed. The base model is VGG-19, a pre-trained, 19-layer convolutional neural network. Its task is to identify abnormal sensor status at a given time step. Each time window is a square matrix, so it can be treated as an image. Any kind of anomaly in the synchrotron’s subsystems may lead to beam loss, affect experiments and, in extreme cases, damage the infrastructure; therefore, when an anomaly is detected, the operator should receive a warning about possible instability.
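A sketch of the idea using Keras, assuming a sensor time window reshaped into a square matrix and replicated to three channels for the pre-trained VGG-19 feature extractor; the paper's exact model setup may differ.

```python
# Sketch: treat a square sensor time window as an image and extract
# features with a pre-trained VGG-19 (window size here is hypothetical).
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input

window = np.random.rand(224, 224)                 # stand-in sensor time window
image = np.repeat(window[..., None], 3, axis=-1)  # grey -> 3 channels
batch = preprocess_input(np.float32(image[None] * 255.0))

base = VGG19(weights="imagenet", include_top=False, pooling="avg")
features = base.predict(batch)                    # (1, 512) embedding
# An anomaly score can then be derived from distance to normal-window features.
```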
Poster WEPHA121 [1.368 MB] | |
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA121 | |
About • | paper received ※ 29 September 2019 paper accepted ※ 09 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
WEPHA163 | NXCALS - Architecture and Challenges of the Next CERN Accelerator Logging Service | 1465 |
CERN’s Accelerator Logging Service (CALS) has been in production since 2003 and stores data from accelerator infrastructure and beam observation devices. Initially expected to hold 1 TB/year, the Oracle-based system has scaled to cope with 2.5 TB/day coming from >2.3 million signals. It serves >1000 users making an average of 5 million extraction requests per day. Nevertheless, with a large data increase during LHC Run 2, the CALS system began to show its limits, particularly for supporting data analytics. In 2016 the NXCALS project was launched with the aim of replacing CALS from Run 3 onwards with a scalable system using "Big Data" technologies. The NXCALS core is production-ready, based on open-source technologies such as Hadoop, HBase, Spark and Kafka. This paper describes the NXCALS architecture and design choices, together with challenges faced while adopting these technologies. These include: write/read performance when dealing with vast amounts of data from heterogeneous data sources with strict latency requirements; and how to extract, transform and load >1 PB of data from CALS to NXCALS. NXCALS is not CERN-specific and can be relevant to other institutes facing similar challenges.
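Purely as an illustrative PySpark extraction: NXCALS exposes its own extraction API, and the path and schema below are hypothetical.

```python
# Illustrative PySpark query only; the storage path and column names
# are hypothetical, not the NXCALS schema.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("logging-sketch").getOrCreate()

signals = spark.read.parquet("/data/example_device")   # hypothetical path
day = (signals
       .filter((F.col("timestamp") >= "2018-10-01") &
               (F.col("timestamp") < "2018-10-02"))
       .groupBy(F.window("timestamp", "1 minute"))
       .agg(F.avg("value").alias("value_avg")))
day.show(5)
```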
Poster WEPHA163 [1.689 MB] | |
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA163 | |
About • | paper received ※ 29 September 2019 paper accepted ※ 09 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
WEPHA164 | CAFlux: A New EPICS Channel Archiver System | 1470 |
We present a new EPICS channel archiver system that is being developed at LANSCE at Los Alamos National Laboratory. Unlike the legacy archiver system, this system is built on the InfluxDB database and the Plotly visualization toolkit. InfluxDB is an open-source time-series database system and provides a SQL-like language for fast storage and retrieval of time-series data. By replacing the old archiving engine and index file with InfluxDB, we obtain a more robust, compact and stable archiving server. On the client side, we introduce a new implementation combining asynchronous and multithreaded programming. We also describe a web-based archiver configuration system that is associated with our current IRMIS system. To visualize the stored data, we use the JavaScript Plotly graphing library, another open-source toolkit for time-series data, to build front-end pages. In addition, we have developed a viewer application with more functionality, including basic data statistics and simple arithmetic on channel values. Finally, we propose some ideas for integrating more statistical analysis into this system.
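A minimal sketch of the server-side write path, assuming the InfluxDB 1.x Python client and a hypothetical PV name; the real system adds asynchronous/multithreaded I/O and a web configuration layer.

```python
# Minimal sketch: read one EPICS channel and store a sample in InfluxDB.
# The PV name, database name and measurement layout are hypothetical.
from influxdb import InfluxDBClient
import epics  # pyepics, to read the live channel value

client = InfluxDBClient(host="localhost", port=8086, database="archiver")

pv = "TEST:CHANNEL1"                    # hypothetical process variable
value = epics.caget(pv)
client.write_points([{
    "measurement": "pv_data",
    "tags": {"pv": pv},
    "fields": {"value": float(value)},
}])
```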
Poster WEPHA164 [0.697 MB] | |
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA164 | |
About • | paper received ※ 27 September 2019 paper accepted ※ 20 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
THCPL01 | Report From the 2nd ICFA Mini-Workshop on Machine Learning for Charged Particle Accelerators | |
The goal of this workshop series is to build a world-wide community of researchers interested in applying machine learning techniques to particle accelerators. The machine learning for accelerator science community will include accelerator physicists as well as computer and controls scientists. I will give a summary of the machine learning topics discussed: facility needs, prognostics, optimization, and beam dynamics.
Slides THCPL01 [24.252 MB] | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
THCPL02 | Evolution of Machine Learning for NIF Optics Inspection | |
The National Ignition Facility (NIF) is the most energetic laser in the world, where scientists from around the world conduct experiments supporting fields such as astrophysics, materials science and nuclear science, and exploring fusion as a clean, safe energy source. In doing so, the NIF routinely operates above the damage threshold for its optics. To extend optic lifetimes, we developed a recycle loop during which each damage site on an optic is tracked through time, protected when it approaches an optic-specific size limit, and then repaired so the optic can be reused. Here we give an overview of the custom image analysis, machine learning, and deep learning methods used throughout the recycle loop for optics inspection both on and off the NIF beamlines. Most recently, we helped automate the optic repair process by identifying microscopic damage before its repair and then evaluating the repaired site for quality control. Since 2007 we have used machine learning to improve accuracy and automate tedious processes to enable and inform an efficient optics recycle loop.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Slides THCPL02 [51.219 MB] | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
THCPL03 | Machine Learning for Beam Size Stabilization at the Advanced Light Source | |
Funding: This research is funded by the US Department of Energy (BES & ASCR Programs), and supported by the Director of the Office of Science of the US Department of Energy under Contract No. DE-AC02-05CH11231.
Synchrotron beam size stability is a necessity for producing reliable, repeatable, and novel experiments at bright light source facilities such as the Advanced Light Source (ALS). As both brightness and coherence are set to increase drastically through upgrades at such facilities, current methods to ensure beam size stabilization will soon reach their limit. Current beam size stability is on the order of several microns (a few percent) and is achieved by a combination of feedbacks, physical models, and feed-forward look-up tables to counteract lattice imperfections and optics perturbations arising from varying insertion device gaps and phases. In this work we highlight our first attempts to implement machine learning to stabilize the beam size at the ALS. The use of neural networks allows for beam size stabilization that does not depend on physical models, instead using insertion device movement as training input. Such a correction model can be continuously retrained via online methods. This method results in beam size stabilization as low as 0.2 microns rms, an order of magnitude lower than current stabilization methods.
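A sketch of the approach under stated assumptions: insertion-device gaps and phases as network input, a beam size correction as output, and scikit-learn's partial_fit standing in for the paper's online retraining; all data below is synthetic.

```python
# Sketch: a small neural network mapping insertion-device settings to a
# beam size correction, with online retraining. Names and shapes are
# hypothetical; data is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
id_settings = rng.uniform(size=(5000, 20))                 # ID gaps and phases
correction = id_settings @ rng.normal(size=(20,)) * 1e-3   # stand-in target

net = MLPRegressor(hidden_layer_sizes=(32, 32), random_state=0)
net.fit(id_settings, correction)                  # initial training

# Online retraining as new ID movements arrive:
new_X, new_y = id_settings[:100], correction[:100]
net.partial_fit(new_X, new_y)
predicted = net.predict(id_settings[:1])          # feed-forward correction
```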
Slides THCPL03 [3.388 MB] | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
THCPL04 | SCIBORG: Analyzing and Monitoring LMJ Facility Health and Performance Indicators | 1597 |
The Laser MegaJoule (LMJ) is a 176-beam laser facility located at the CEA CESTA laboratory near Bordeaux (France). It is designed to deliver about 1.4 MJ of energy to targets for high-energy-density physics experiments, including fusion experiments. Since June 2018, it has operated 5 of the 22 bundles expected in the final configuration. Monitoring the health and performance of such a facility is essential to maintaining high operational availability. SCIBORG is the first step towards a larger software system that will collect all the facility parameters in one tool. Currently, SCIBORG imports experiment setups and results as well as alignment and PAM* control command parameters. It is designed to perform temporal and cross-parameter data analysis and implements monitoring features (dashboards). This paper gives first user feedback and the milestones for the full-spectrum system.
*PreAmplifier Module
Slides THCPL04 [4.882 MB] | |
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-THCPL04 | |
About • | paper received ※ 01 October 2019 paper accepted ※ 08 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
THCPL05 | Signal Analysis for Automated Diagnostic Applied to LHC Cryogenics | 1601 |
The operation of the LHC at CERN depends heavily on its associated infrastructure, such as its cryogenic system, where many conditions must be fulfilled for the superconducting magnets and RF cavities. In 2018, the LHC cryogenic system caused 172 hours of accelerator downtime (out of 5760 running hours). Since cryogenic recovery acts as a time amplifier, it is important to identify non-optimized processes and malfunctioning systems at an early stage to anticipate losses of availability. The LHC cryogenic control systems embed about 60,000 I/O points, of which more than 20,000 are analog signals that have to be monitored by operators. It is therefore crucial to select only the relevant and necessary information to be presented. This paper presents a signal analysis system created to automatically generate adequate daily reports on potential problems in the LHC cryogenic system that are not covered by conventional alarms, together with examples of real issues that were found and treated during the 2018 physics run. The analysis system, which is written in Python, is generic and can be applied to many different systems.
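A generic sketch of one such automated check (not the CERN code): flag analog signals whose latest daily mean drifts from a rolling baseline, for collection into the daily report.

```python
# Generic sketch of an automated daily check over many analog signals.
import pandas as pd

def daily_flags(df, z_limit=4.0):
    """df: DataFrame indexed by time, one column per analog signal.

    Returns the signals whose latest daily mean deviates from the
    30-day rolling mean by more than z_limit rolling standard deviations.
    """
    daily = df.resample("1D").mean()
    base = daily.rolling(30, min_periods=10)
    z = (daily - base.mean()) / base.std()
    latest = z.iloc[-1].abs()
    return latest[latest > z_limit].sort_values(ascending=False)

# report = daily_flags(signals)  # collected into the daily summary
```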
Slides THCPL05 [1.781 MB] | |
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-THCPL05 | |
About • | paper received ※ 30 September 2019 paper accepted ※ 10 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
THCPL06 | Introducing Big Data Analysis in a Proton Therapy Facility to Reduce Technical Downtime | 1608 |
At the Center for Proton Therapy of the Paul Scherrer Institute, about 450 cancer patients are treated yearly using accelerated protons in three treatment areas. The facility has been active since 1984, and for each patient we keep detailed log files containing machine measurements during each fraction of the treatment, which we analyze daily to guarantee dose and position values within the prescribed tolerances. Furthermore, each control and safety system generates textual log files as well as periodic measurements such as pressure, temperature, beam intensity, magnetic fields or reaction time of components. This currently adds up to approximately 5 GB per day. Downtime of the facility is both inconvenient for patients and staff and financially relevant. This article describes how we have extended our data analysis strategies, using archived machine parameters and online measurements to understand interdependencies, to perform preventive maintenance of ageing components and to optimize processes. We have chosen Python to interface, structure and analyze the different data sources in a standardized manner. The online channels have been accessed via an EPICS archiver.
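A sketch of pulling one archived channel into pandas, assuming an EPICS Archiver Appliance retrieval endpoint and a hypothetical PV name; the facility's actual archiver setup is not described in the abstract.

```python
# Sketch: retrieve an archived channel via an (assumed) EPICS Archiver
# Appliance endpoint and analyze it with pandas. URL and PV are hypothetical.
import pandas as pd
import requests

BASE = "http://archiver.example.org/retrieval/data/getData.json"
params = {"pv": "BEAM:INTENSITY",         # hypothetical channel
          "from": "2019-06-01T00:00:00Z",
          "to": "2019-06-02T00:00:00Z"}

data = requests.get(BASE, params=params).json()[0]["data"]
df = pd.DataFrame(data)                   # columns include 'secs' and 'val'
df["time"] = pd.to_datetime(df["secs"], unit="s")
print(df["val"].describe())               # e.g. spot drifts in daily checks
```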
Slides THCPL06 [7.028 MB] | |
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-THCPL06 | |
About • | paper received ※ 30 September 2019 paper accepted ※ 09 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
THCPL07 | Experience Using NuPIC to Detect Anomalies in Controls Data | 1612 |
NuPIC (Numenta Platform for Intelligent Computing) is an open-source computing platform that attempts to mimic neurological pathways in the human brain. We have used the Python implementation to explore the utility of this system for detecting anomalies in both stored and real-time data coming from the controls system of the RHIC collider at Brookhaven National Laboratory. This paper explores various aspects of that work, including the types of data most suited to anomaly detection, the likelihood of producing false-positive and false-negative anomaly results, and experiences with training the system. We also report on the use of this software for monitoring various parts of the controls system in real time.
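A structural sketch following NuPIC's published OPF anomaly example, with the per-channel model_params dictionary and the records iterable left as placeholders (see the NuPIC documentation); this is an outline, not the authors' code.

```python
# Structural sketch of NuPIC OPF anomaly detection on control-system data.
# model_params: per-channel parameter dict from the NuPIC docs (placeholder).
# records: an iterable of (timestamp, value) samples (placeholder).
from nupic.frameworks.opf.model_factory import ModelFactory

model = ModelFactory.create(model_params)
model.enableInference({"predictedField": "value"})

for timestamp, value in records:                 # stored or real-time samples
    result = model.run({"timestamp": timestamp, "value": value})
    score = result.inferences["anomalyScore"]    # 0.0 (expected) .. 1.0 (novel)
    if score > 0.9:
        print(timestamp, value, score)           # candidate anomaly
```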
Slides THCPL07 [11.115 MB] | |
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-THCPL07 | |
About • | paper received ※ 02 October 2019 paper accepted ※ 09 October 2019 issue date ※ 30 August 2020 | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |