How is simulation software used?
Mining companies can significantly cut costs by optimizing asset usage and anticipating their future equipment needs. In logistics, simulation can produce a realistic picture of a system, including unpredictable inputs such as shipment lead times.

The Use of Simulation with an Example: Simulation Modeling for Efficient Customer Service

Consider a bank that wants to determine how many tellers it needs to provide an acceptable level of service. This specific example also applies to the more general problem of human and technical resource management, where companies naturally seek to lower the cost of underutilized resources, whether technical experts or equipment.
Firstly, for the bank, the level of service was defined as the average queue size. Relevant system measures were then selected to set the parameters of the simulation model: the number and frequency of customer arrivals, the time a teller takes to attend to a customer, and the natural variation that can occur in all of these, in particular lunch-hour rushes and complex requests.
A flowchart corresponding to the structure and processes of the department was then created. A simulation model only needs to consider the factors that affect the problem being analyzed. For example, the availability of office services for corporate accounts, or of the credit department, has no effect on services for individuals, because they are physically and functionally separate.
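To make this concrete, the following is a minimal sketch of how such a model might be programmed, here in Python. Every number in it (three tellers, arrival rates, service times, the 10% share of complex requests) is an illustrative assumption, not a figure from the bank study; the structure, however, mirrors the measures listed above: random arrivals with a lunch-hour rush, variable service times, and the average queue size as the output.

    import random
    import statistics

    def simulate_day(n_tellers=3, hours=8, seed=None):
        """One simulated day at the branch; returns the average queue size."""
        rng = random.Random(seed)
        close = hours * 60.0                      # branch open time, in minutes
        teller_free = [0.0] * n_tellers           # when each teller is next idle
        t, waits, arrivals = 0.0, [], 0
        while True:
            # Arrivals are Poisson; the rate doubles during a lunch rush
            # (minutes 240-300 of the day).
            rate = 2.0 if 240 <= t < 300 else 1.0
            t += rng.expovariate(rate)
            if t >= close:
                break
            arrivals += 1
            # Assume 10% of customers bring complex requests taking ~3x longer.
            mean_service = 9.0 if rng.random() < 0.10 else 3.0
            k = min(range(n_tellers), key=teller_free.__getitem__)
            start = max(t, teller_free[k])        # wait if all tellers are busy
            waits.append(start - t)
            teller_free[k] = start + rng.expovariate(1.0 / mean_service)
        # Little's law: average queue size = arrival rate x average wait.
        return (arrivals / close) * statistics.mean(waits)

    # Replicate many days to smooth out day-to-day randomness.
    days = [simulate_day(seed=s) for s in range(500)]
    print(f"average queue size: {statistics.mean(days):.2f} customers")

Replicating many simulated days is what lets the analyst compare staffing levels: rerunning the replication with a different n_tellers shows how the average queue responds.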
What is Simulation?

The Purpose of Simulation

"We frequently look into the future of mankind and see dangers. Looking into the future may be one of the reasons that brains evolved in the first place."
Richard Dawkins

The purpose of simulation is to shed light on the underlying mechanisms that control the behavior of a system.

Addressing Risk and Uncertainty Using Probabilistic Simulation

"Our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness. Vast ills have followed a belief in certainty."
Kenneth Arrow, Nobel Laureate in Economics

Although simulation can be a valuable tool for better understanding the underlying mechanisms that control the behavior of a system, using simulation to make predictions of the future behavior of a system can be difficult.
Deterministic Simulation

Many simulation tools and approaches are deterministic.

Probabilistic Simulation

It is possible, however, to quantitatively represent uncertainties in simulations.

Monte Carlo Simulation

In order to compute the probability distribution of predicted performance, it is necessary to propagate (translate) the input uncertainties into uncertainties in the results.
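A minimal sketch of the Monte Carlo idea, using a deliberately trivial travel-time model and made-up input distributions: each uncertain input is sampled, the deterministic model is run on that sample, and the collection of outputs approximates the distribution of results.

    import random
    import statistics

    def travel_time(distance_km, speed_kmh, delay_h):
        """A trivial deterministic model: door-to-door time in hours."""
        return distance_km / speed_kmh + delay_h

    rng = random.Random(42)
    results = []
    for _ in range(10_000):
        # Sample each uncertain input from its assumed distribution...
        speed = max(rng.gauss(80, 10), 20)   # km/h, clamped to stay positive
        delay = rng.expovariate(1 / 0.5)     # delays averaging 0.5 h
        # ...and run the deterministic model on that realization.
        results.append(travel_time(400, speed, delay))

    results.sort()
    print(f"mean: {statistics.mean(results):.2f} h, "
          f"95th percentile: {results[int(0.95 * len(results))]:.2f} h")

The output is not a single predicted value but a distribution, from which means, percentiles, and exceedance probabilities can be read off directly.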
"The ability to define what may happen in the future and to choose among alternatives lies at the heart of contemporary societies."
Peter Bernstein, Against the Gods: The Remarkable Story of Risk

Simulation should be used when the consequences of a proposed action, plan or design cannot be directly and immediately observed.
Types of Simulation Tools

Because simulation is such a powerful tool to assist in understanding complex systems and to support decision-making, a wide variety of approaches and tools exist.

Spreadsheets

Perhaps the simplest and most broadly used general-purpose simulator is the spreadsheet.

Discrete Event Simulators

These tools rely on a transaction-flow approach to modeling systems.

Agent-Based Simulators

This is a special class of discrete event simulator in which the mobile entities are known as agents.
Continuous Simulators

This class of tools solves the differential equations that describe the evolution of a system in continuous time; a minimal sketch of this approach follows below.

Hybrid Simulators

These tools combine the features of continuous simulators and discrete event simulators.
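The sketch below illustrates what a continuous simulator does at its core: numerically stepping a differential equation forward in time. The explicit Euler method and the Newton's-law-of-cooling example are illustrative choices only, not a description of any particular tool.

    def simulate(f, x0, dt, t_end):
        """Integrate dx/dt = f(x) with the explicit Euler method."""
        t, x, trajectory = 0.0, x0, [(0.0, x0)]
        while t < t_end:
            x += f(x) * dt        # Euler step: x(t+dt) ~ x(t) + f(x(t)) * dt
            t += dt
            trajectory.append((t, x))
        return trajectory

    # Example: Newton's law of cooling, dT/dt = -k * (T - T_ambient).
    k, T_ambient = 0.1, 20.0
    for t, T in simulate(lambda T: -k * (T - T_ambient), 90.0, 1.0, 30.0)[::10]:
        print(f"t = {t:4.1f} min  T = {T:5.1f} C")

Production continuous simulators use adaptive, higher-order integrators rather than fixed-step Euler, but the principle is the same.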
Nowadays, the economy requires a fast and flexible reaction to the market. Customer demands are becoming more and more dynamic and unpredictable, and it is hard to tell how your logistics system will react to future changes. You might want to know what the best solution is, not only for now but also for the future.
This is hard to predict with static calculations, because such systems have many dependencies that are not static. Simulation is the only tool that can analyze and improve these complex, dynamic systems. Simulation is the imitation of the operation of a real-world process or system over time.
The act of simulating something first requires that a model be developed; this model represents the key characteristics or behaviors of the selected physical or abstract system or process. The model represents the system itself, whereas the simulation represents the operation of the system over time. The purpose of a simulation is a crucial factor in validation.
For some purposes, the simulation only needs to be weakly predictive, such as being able to rank scenarios by their stress on a system, rather than to predict actual performance. For other purposes, a simulation needs to be strongly predictive. Experience should help indicate, over time, which purposes require what degree and what type of predictive accuracy. Models and simulations are often written in a general form so that they will have wide applicability for a variety of related systems.
An example is a missile fly-out model, which might be used for a variety of missile systems. A model that has been used previously is often referred to as a legacy model. In an effort to reduce the costs of simulation, legacy models are sometimes used to represent new systems, based on a complete validation for a similar system. Although this avoids the costly development of a de novo simulation, the use of a legacy model presents validation challenges. In particular, new systems by definition have new features.
Thus, a legacy model should not be used for a new application unless (1) a strong argument can be made about the similarity of the applications and (2) an external validation with the new system is conducted.
A working presumption should be that the simulation will not be useful for the new application unless proven otherwise. Modeling and simulation may make their greatest contribution to operational test through improving operational test design. Modeling and simulation were used to help plan the operational test for the Longbow Apache (see Appendix B).
Constructive simulation models can play at least four key roles. First, simulation models that properly incorporate both the estimated heterogeneity of system performance as a function of various characteristics of the test scenarios and the size of the remaining unexplained component of the variability of system performance can be used to help determine the error probabilities of any significance tests used in assessing system effectiveness or suitability.
To do this, the relationships between measures of performance and environmental and other scenario characteristics, under the various hypotheses of interest, can be programmed, along with the number and characteristics of the test scenarios, and the results tabulated as they would be in an operational test.
Such replications can be repeated, keeping track of the percentage of tests that the system passed. This approach could be a valuable tool for computing error probabilities or operating characteristics for non-standard significance tests.
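The following sketch shows the replication idea in its simplest possible form. The performance model is deliberately crude (independent pass/fail engagement trials with a fixed true hit rate) and every number is hypothetical; the point is only how repeated simulated tests trace out the pass probability as a function of true performance.

    import random

    def simulated_test(true_p_hit, n_trials, threshold, rng):
        """One simulated operational test: the system passes if the
        observed hit rate over n_trials meets the required threshold."""
        hits = sum(rng.random() < true_p_hit for _ in range(n_trials))
        return hits / n_trials >= threshold

    rng = random.Random(1)
    reps = 20_000
    # Operating characteristic: pass probability vs. true performance.
    for true_p in (0.60, 0.70, 0.80, 0.90):
        passes = sum(simulated_test(true_p, n_trials=40, threshold=0.75, rng=rng)
                     for _ in range(reps))
        print(f"true hit rate {true_p:.2f}: pass probability {passes / reps:.3f}")

A realistic version would replace the fixed hit rate with the programmed relationships between performance and scenario characteristics described above, but the tabulation logic is the same.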
Second, simulation models can help select scenarios for testing. Simulation models can assist in understanding which factors need controlling and which can be safely ignored in deciding which scenarios to choose for testing, and they can help to identify appropriate levels of factors. They can also be used to choose scenarios that would maximally discriminate between a new system and a baseline system.
This use requires a simulation model for the baseline system, which presumably would have been archived. For tests whose objective is to determine system performance in the most stressful scenario(s), a simulation model can help select the most stressful scenario(s). As a feedback tool, assuming that information is to be collected from scenarios other than the most stressful ones, the ranking of the scenarios with respect to performance from the simulation model can be compared with the ranking from the operational test, thereby providing feedback into the model-building process, helping to validate the model and to discover areas in which it is deficient.
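One simple way to quantify such a comparison is a rank correlation between the simulated and observed scenario orderings. The sketch below computes Spearman's rank correlation from scratch on hypothetical scores; a value near 1 would suggest the model orders scenarios much as the operational test does.

    def ranks(xs):
        """Rank values from best (1) to worst (n); assumes no ties."""
        order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)
        r = [0] * len(xs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    def spearman(a, b):
        """Spearman's rho: 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
        n = len(a)
        d2 = sum((x - y) ** 2 for x, y in zip(ranks(a), ranks(b)))
        return 1 - 6 * d2 / (n * (n * n - 1))

    # Hypothetical per-scenario performance scores (higher is better).
    simulated = [0.82, 0.74, 0.55, 0.91, 0.60]   # from the simulation model
    observed  = [0.78, 0.70, 0.62, 0.88, 0.52]   # from the operational test
    print(f"rank agreement (Spearman rho): {spearman(simulated, observed):.2f}")

Scenarios whose ranks disagree sharply between the two columns are exactly the areas in which the model is likely deficient.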
Third, there may be an advantage in using simulation models as a living repository of information collected about a system's operational performance.
This repository could be used for test planning and also to chart progress towards development, since each important measure of performance or effectiveness would have a target value from the Operational Requirements Document, along with the values estimated at any time, using either early operational assessments or, for requirements that did not have a strong operational aspect, the results from developmental testing.
Fourth, every instance in which a simulation model is used to design an operational test, and the test is then carried out, presents an opportunity for model validation.
The assumptions used in the simulation model can then be checked against test experience. Such an analysis will improve the simulation model in question, a necessary step if the simulation model is to be used in further operational tests or to assess the performance of the system as a baseline when the next innovation is introduced.
Feedback of this type will also help provide general experience to model developers as to which approaches work and which do not.
Of course, this kind of feedback will not be possible without the data archive recommended in Chapter 3. As also mentioned in Chapters 3, 6, and 8, inclusion of field use data in such an archive provides great opportunities for validating the methods used in operational test design. The results of such tests, in turn, should be used to calibrate and validate all relevant models and simulations. The repository would include data from all relevant sources of information, including experience with similar systems, developmental testing, early operational assessments, operational testing, training exercises, and field use.
A final note is that validation for test design, although necessary, does not need to be as comprehensive as validation for simulation that is to be used for augmenting operational test evaluation. One can design an effective test for a system without understanding precisely how a system behaves. For example, simulation can be used to identify the most stressful environment without knowing what the precise impact of that environment will be on system performance.
The use of modeling and simulation to assist in the operational evaluation of defense systems is relatively contentious. On one side, modeling and simulation is already used in this way in industrial applications.
Simulation can save money, is safer, does not have the environmental problems of operational test, is not constrained in defense applications by the availability of enemy systems, and is always feasible in some form. On the other side, information obtained from modeling and simulation may at times be limited in comparison with that from operational testing.
Its exclusive use may lead to unreliable or ineffective systems passing into full-rate production before major defects are discovered. An important example of a system for which the estimated levels for measures of effectiveness changed depending on the type of simulation used is the M1A2 tank. In a briefing for then Secretary of Defense William Perry (see Wright), detailing work performed by the Army Operational Test and Evaluation Command, three simulation environments were compared: constructive simulation, virtual simulation, and live simulation (essentially, an operational test).
The purpose was to "respond to Joint Staff request to explore the utility of the Virtual Simulation Environment in defining and understanding requirements." The constructive simulation indicated that the M1A2 performed better; the virtual simulation indicated that the M1A2 was not better, which was confirmed by the field test. The problems with the M1A2 had to do, in part, with immature software.
The specific limitation of the constructive simulation was that the various assumptions underlying the engagements resulted in the M1A2 detecting and killing more targets. The virtual simulation, even though its overall results agreed with the field test, had limitations as well: the primary problem was the lack of fidelity of the simulated terrain, which resulted in units not being able to use the terrain to mask movements or to emulate having dug-in defensive positions.
In addition, insufficient uncertainty was represented in the scenarios.

In this section we discuss some issues concerning how to use validated simulations to supplement operational test evaluation.
The use of statistical models to assist in operational evaluation—possibly in conjunction with the use of simulation models—is touched on in Chapter 6. An area with great promise is the use of a small number of field events, modeling and simulation, and statistical modeling, to jointly evaluate a defense system under development.
Unfortunately, the appropriate combination of the first two information sources with statistical modeling is extremely specific to the situation. It is, therefore, difficult to make a general statement about such an approach, except to note that it is clearly the direction of the future, and research should be conducted to help understand the techniques that work. Modeling and simulation have been suggested as ways of extrapolating or interpolating to untested situations or scenarios.
There are two general types of interpolation or extrapolation that modeling and simulation might be used to support. The first is horizontal extrapolation, in which the operational performance of a defense system is first estimated in several scenarios—combinations of weather, day or night, tactics, terrain, etc.
Simulation is then used to predict performance of the system in untested scenarios. The extent to which the untested scenarios are related to the tested scenarios typically determines how well the simulation can predict performance. This implies that the tested scenarios should be selected, to the extent possible, so that the modeled scenarios of interest have characteristics in common with the tested scenarios (see the discussion in Chapter 5 on Dubin's challenge). One way to ensure this commonality is to use factor levels to define the modeled scenarios that are less extreme than those used in the tested scenarios.
In other words, extrapolation to an entirely different sort of environment would be risky, as would extrapolation to a similar environment, but at a more extreme level.
The closer the untested environment is to the tested one, the closer one is to interpolation than extrapolation. However, if no tested environments included any rain, it would be risky to use a simulation to extrapolate to rainy conditions based on the system performance in dry conditions.
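A mechanical version of this check is easy to sketch: flag any factor of a modeled scenario that falls outside the envelope spanned by the tested scenarios. The factors and values below are entirely hypothetical.

    def extrapolation_flags(tested, modeled):
        """Return the factors of a modeled scenario that fall outside the
        range spanned by the tested scenarios (extrapolation, not
        interpolation)."""
        flags = {}
        for factor, value in modeled.items():
            lo = min(s[factor] for s in tested)
            hi = max(s[factor] for s in tested)
            if not lo <= value <= hi:
                flags[factor] = (value, (lo, hi))
        return flags

    # Hypothetical scenario factors: temperature (C), visibility (km), rain (mm/h).
    tested = [
        {"temp": 15, "visibility": 10, "rain": 0},
        {"temp": 30, "visibility": 4,  "rain": 0},
        {"temp": 25, "visibility": 8,  "rain": 2},
    ]
    modeled = {"temp": 22, "visibility": 6, "rain": 12}
    for factor, (value, span) in extrapolation_flags(tested, modeled).items():
        print(f"{factor}={value} is outside the tested range {span}: extrapolation")

Here only the rain factor would be flagged, echoing the point above: no tested scenario included heavy rain, so predictions for rainy conditions rest on extrapolation rather than interpolation.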
Accelerated life testing, discussed in Chapter 7, is one way to extrapolate with respect to level. The second type is vertical extrapolation, either from the performance of a single system against a single system to the performance of multiple systems in a multisystem engagement, or from the performance of components to the performance of the full system. The first type of vertical extrapolation involves an empirical question: whether the operational performance estimated for a single system can be used in a simulation to provide information about multiple system engagements.
Experiments should be carried out in situations in which one can test the multiple system engagement to see whether this type of extrapolation is warranted. This kind of extrapolation should often be successful, and given the safety, cost, and environmental issues raised by multisystem engagements, it is often necessary.
The second type of vertical extrapolation depends on whether information about the performance of components is sufficient for understanding performance of the full system.
There are systems for which a good deal of operational understanding can be gained by testing portions of the system, for example, by using hardware-in-the-loop simulations. This is, again, an empirical question, and tests can be carried out to help identify when this type of extrapolation is warranted. This question is one for experts in the system under test rather than a statistical question.

Consider, for example, extrapolating from an operational test of a small number of tanks, where two extrapolations are involved. First, extrapolation is made to different types of weather, terrain, and tactics.
Second, the extrapolation is made from several tanks to a larger number of tanks. The first extrapolation requires more justification than the second. In such situations, it might be helpful to keep the degree of true extrapolation to a minimum through choice of the test scenarios.