Journal of Simulation Engineering
https://jsime.org/index.php/jsimeng
<p>The peer-reviewed and peer-published <em>Journal of Simulation Engineering (JSimE)</em> is dedicated to the <em>accessible</em> dissemination of research activity in the broad area of simulation engineering to the global <em>modeling and simulation</em> community by providing <em>True Open Access</em>.</p>
<p><em>True Open Access</em> means that the journal is accessible not only to readers, but also to authors, who do not have to pay excessive publication charges. It also means that research results are not just reported in PDF files with limited interactivity and accessibility, but are available in the more accessible HTML format, which allows improved text retrieval, analysis and indexing, in-document linking, semantic annotations, multimedia content and interactivity.</p>
<p>The focus of the journal is on concepts, theories and techniques for the design, implementation and validation of simulations, simulators and simulation frameworks, including requirements analysis, modeling languages, model engineering, user interface design and development methodologies (such as agile or model-driven development).</p>
Consortium for True Open Access in Modeling and Simulation
en-US
Journal of Simulation Engineering
ISSN 2569-9466
<p>All manuscripts published in JSimE are licensed under a <a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International (CC BY 4.0)</a> license. Submission of a manuscript to JSimE implies acceptance of this license.</p>

A Gamified Synthetic Environment for Evaluation of Counter-Disinformation Solutions
https://jsime.org/index.php/jsimeng/article/view/13
<p>This paper presents a simulation-based approach to countering online dis/misinformation. This disruptive-technology experiment incorporated a synthetic environment component, based on an adapted SIR epidemiological model, to evaluate and visualize the effectiveness of suggested solutions to the issue. The participants in the simulation were given a realistic scenario depicting a dis/misinformation threat and were asked to select a number of solutions, described in Ideas-of-Systems (IoS) cards. During the event, the qualitative and quantitative characteristics of the IoS cards were tested in a synthetic environment (SEN) built on a Susceptible-Infected-Resistant (SIR) model. The participants, divided into teams, presented and justified their dis/misinformation strategies, each comprising three IoS card selections. A jury of subject matter experts announced the winning team based on the merits of the proposed strategies and the compatibility of the cards grouped together.</p>
Jesse Richman, Lora Pitman, Girish Nandakumar
Copyright (c) 2022 Lora Pitman, Girish Nandakumar, Jesse Richman
http://creativecommons.org/licenses/by/4.0
2022-04-07
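As a rough illustration of the Susceptible-Infected-Resistant dynamics this kind of synthetic environment builds on, the following minimal Python sketch integrates the classic SIR equations with Euler steps, reinterpreting "infection" as adoption of a false narrative. It is not the paper's adapted model: the population sizes and the rates `beta` and `gamma` are hypothetical placeholders.

```python
# Minimal SIR sketch of dis/misinformation spread (illustrative only;
# the paper's adapted model and parameters are not given in the abstract).
# S: users susceptible to a false narrative, I: users spreading it,
# R: users resistant to it. All parameter values are hypothetical.

def simulate_sir(s0=9900.0, i0=100.0, r0=0.0, beta=0.3, gamma=0.1,
                 days=160, dt=1.0):
    n = s0 + i0 + r0
    s, i, r = s0, i0, r0
    history = [(0.0, s, i, r)]
    for step in range(1, int(days / dt) + 1):
        ds = -beta * s * i / n   # susceptible users adopting the narrative
        dr = gamma * i           # spreaders becoming resistant
        s += ds * dt
        i += (-ds - dr) * dt
        r += dr * dt
        history.append((step * dt, s, i, r))
    return history

if __name__ == "__main__":
    for t, s, i, r in simulate_sir()[::20]:
        print(f"day {t:5.1f}  S={s:8.1f}  I={i:8.1f}  R={r:8.1f}")
```

In such a setup, a counter-disinformation measure would typically be modeled as changing `beta` (slowing spread) or `gamma` (speeding up resistance), which is one plausible way the qualitative and quantitative characteristics of an IoS card could be mapped onto the model.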
Improving Delivery Performance of Construction Manufacturing Using Machine Learning
https://jsime.org/index.php/jsimeng/article/view/19
<p>This paper is concerned with the development, testing, and optimization of a machine learning method for controlling the production of precast reinforced concrete components. A discussion identifies the unique challenges of achieving production efficiency in the construction industry, namely: uncertain and sporadic demand for work; high customization of component design; the need to produce work to order; and little prospect of stockpiling work. This is followed by a review of the methods available to tackle this problem, which can be divided into search-based techniques (such as heuristics) and experience-based techniques (such as artificial neural networks). A model of an actual factory for producing precast reinforced concrete components is then described, to be used in the development and testing of the controller. A reinforcement learning strategy is proposed for training a deep artificial neural network to act as the control policy for this factory. The ability of this policy to learn is evaluated, and its performance is compared to that of a rule-of-thumb policy and a random policy over a series of test production runs. The reinforcement learning method developed an effective and reliable policy that significantly outperformed the rule-of-thumb and random policies. An additional series of experiments was undertaken to further optimize the performance of the method by varying the number of input variables presented to the policy. The paper concludes with an outline of proposed future research designed to further improve performance and to extend the scope of application of the method.</p>
Ian Flood, Xiaoyan Zhou
Copyright (c) 2023 Ian Flood, Xiaoyan Zhou
http://creativecommons.org/licenses/by/4.0
2023-06-09
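The following sketch shows, in miniature, the kind of comparison the abstract describes: a control policy trained by reinforcement learning evaluated against a rule-of-thumb policy and a random policy. It uses tabular Q-learning on a toy two-product backlog environment; the paper itself trains a deep neural network on a detailed factory model, so the state, action, and reward definitions below are hypothetical stand-ins.

```python
# Toy production-control comparison (illustrative only). State: capped
# backlog counts of two hypothetical component types; action: which type
# to produce next; reward: negative total backlog. All values are made up.

import random

CAP = 5  # backlog counts are capped to keep the state space tabular

def step(state, action, rng):
    s0, s1 = state
    if action == 0 and s0 > 0:
        s0 -= 1
    elif action == 1 and s1 > 0:
        s1 -= 1
    # Sporadic demand: each type receives a new order with probability 0.4.
    if rng.random() < 0.4:
        s0 = min(CAP, s0 + 1)
    if rng.random() < 0.4:
        s1 = min(CAP, s1 + 1)
    return (s0, s1), -(s0 + s1)

def run(policy, episodes=200, horizon=100, seed=0, q=None, train=False):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(horizon):
            a = policy(state, rng)
            nxt, r = step(state, a, rng)
            if train:  # one-step Q-learning update
                best = max(q[(nxt, b)] for b in (0, 1))
                q[(state, a)] += 0.1 * (r + 0.95 * best - q[(state, a)])
            state, total = nxt, total + r
    return total / episodes  # average return per episode

q = {((i, j), a): 0.0 for i in range(CAP + 1)
     for j in range(CAP + 1) for a in (0, 1)}
greedy = lambda s, rng: max((0, 1), key=lambda a: q[(s, a)])
explore = lambda s, rng: rng.choice((0, 1)) if rng.random() < 0.1 else greedy(s, rng)

run(explore, episodes=2000, q=q, train=True)  # training runs
print("learned:", run(greedy, seed=1))
print("rule   :", run(lambda s, rng: 0 if s[0] >= s[1] else 1, seed=1))
print("random :", run(lambda s, rng: rng.choice((0, 1)), seed=1))
```

The three final calls mirror the paper's evaluation design: the same sequence of production runs (here, the same random seed) is replayed under each policy so that differences in average return reflect the policies rather than the demand stream.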