Recall from queueing theory that in essence all queuing systems can be broken down into individual sub-systems consisting of entities queuing for some activity (as shown below).
Typically we can talk of this individual sub-system as dealing with customers queuing for service. To analyse this sub-system we need information relating to:
- arrival process:
- how customers arrive e.g. singly or in groups (batch or bulk arrivals)
- how the arrivals are distributed in time (e.g. what is the probability distribution of time between successive arrivals (the interarrival time distribution))
- whether there is a finite population of customers or (effectively) an infinite number
- service mechanism:
- a description of the resources needed for service to begin
- how long the service will take (the service time distribution)
- the number of servers available
- whether the servers are in series (each server has a separate queue) or in parallel (one queue for all servers)
- whether preemption is allowed (a server can stop processing a customer to deal with another "emergency" customer)
- queue characteristics:
- how, from the set of customers waiting for service, do we choose the one to be served next (e.g. FIFO (first-in first-out) also known as FCFS (first-come first served); LIFO (last-in first-out); randomly) (this is often called the queue discipline)
- do we have:
- balking (customers deciding not to join the queue if it is too long)
- reneging (customers leave the queue if they have waited too long for service)
- jockeying (customers switch between queues if they think they will get served faster by so doing)
- a queue of finite capacity or (effectively) of infinite capacity
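Taken together these items are the parameters that must be specified before a queueing system can be analysed or simulated. Purely as an illustration (the field names below are my own assumptions, not a standard notation), they could be recorded in a structure such as:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QueueingSystem:
    """Illustrative record of the characteristics listed above (field names are assumptions)."""
    interarrival_time: Callable[[], float]   # arrival process: sampler for times between arrivals
    service_time: Callable[[], float]        # service mechanism: sampler for service durations
    servers: int = 1                         # number of servers available
    discipline: str = "FIFO"                 # queue discipline, e.g. FIFO, LIFO, random
    capacity: float = float("inf")           # queue capacity: finite or (effectively) infinite
    population: float = float("inf")         # customer population: finite or (effectively) infinite
```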
Note here that integral to queuing situations is the idea of uncertainty in, for example, interarrival times and service times. This means that probability and statistics are needed to analyse queuing situations.
Whilst queueing theory can be used to analyse simple queueing systems, more complex queueing systems are typically analysed using simulation (more accurately called discrete-event simulation). Precisely what discrete-event simulation is will become clear below.
In OR we typically deal with discrete-event simulation. Note here however that the word simulation has a wider meaning. For example you may have heard of aircraft simulators which reproduce the behaviour of an aircraft in flight, but in reality one never leaves the ground. These are used for training purposes. You may have taken part in a business game simulation in which you were in charge of an imaginary company and the results for your company (sales made, profit, etc) were generated by a computer in some manner. These are often called business simulations.
An example simulation
To illustrate discrete-event simulation let us take the very simple system below, with just a single queue and a single server.
Suppose that customers arrive with interarrival times that are uniformly distributed between 1 and 3 minutes, i.e. all arrival times between 1 and 3 minutes are equally likely. Suppose too that service times are uniformly distributed between 0.5 and 2 minutes, i.e. any service time between 0.5 and 2 minutes is equally likely. We will illustrate how this system can be analysed using simulation.
Conceptually we have two separate, and independent, statistical distributions. Hence we can think of constructing two long lists of numbers - the first list being interarrival times sampled from the uniform distribution between 1 and 3 minutes, the second list being service times sampled from the uniform distribution between 0.5 and 2 minutes. By sampled we mean that we (or a computer) look at the specified distribution and randomly choose a number (interarrival time or service time) from this specified distribution. For example in Excel using 1+(3-1)*RAND() would randomly generate interarrival times and 0.5+(2-0.5)*RAND() would randomly generate service times.
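A minimal Python equivalent of those Excel formulas, assuming the same two uniform distributions, is sketched below:

```python
import random

def interarrival_time():
    """Sample an interarrival time, uniform between 1 and 3 minutes (cf. 1+(3-1)*RAND())."""
    return random.uniform(1.0, 3.0)

def service_time():
    """Sample a service time, uniform between 0.5 and 2 minutes (cf. 0.5+(2-0.5)*RAND())."""
    return random.uniform(0.5, 2.0)

# For example, two lists of 1000 sampled values rounded to one decimal place:
interarrivals = [round(interarrival_time(), 1) for _ in range(1000)]
services = [round(service_time(), 1) for _ in range(1000)]
```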
Suppose our two lists are:
Interarrival times Service times
1.9 1.7
1.3 1.8
1.1 1.5
1.0 0.9
etc etc
where to ease the processing we have chosen to work to one decimal place.
Suppose now we consider our system at time zero (T=0), with no customers in the system. Take the lists above and ask yourself the question: What will happen next?
The answer is that after 1.9 minutes have elapsed a customer will appear. The queue is empty and the server is idle so this customer can proceed directly to being served. What will happen next?
The answer is that after a further 1.3 minutes have elapsed (i.e. at T=1.9+1.3=3.2) the next customer will appear. This customer will join the queue (since the server is busy). What will happen next?
The answer is that at time T=1.9+1.7=3.6 the customer currently being served will finish and leave the system. At that time we have a customer in the queue and so they can start their service (which will take 1.8 minutes and hence end at T=3.6+1.8=5.4). What will happen next?
The answer is that 1.1 minutes after the previous customer arrival (i.e. at T=3.2+1.1=4.3) the next customer will appear. This customer will join the queue (since the server is busy). What will happen next?
The answer is that after a further 1.0 minutes have elapsed (i.e. at T=4.3+1.0=5.3) the next customer will appear. This customer will join the queue (since there is already someone in the queue), so now the queue contains two customers waiting for service. What will happen next?
The answer is that at T=5.4 the customer currently being served will finish and leave the system. At that time we have two customers in the queue and assuming a FIFO queue discipline the first customer in the queue can start their service (which will take 1.5 minutes and hence end at T=5.4+1.5=6.9). What will happen next?
The answer is that ...... etc and we could continue in this fashion if we so wished (had the time and energy!). Plainly the above process is best done by a computer.
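To illustrate, the sketch below replays the two lists above for a single server with a FIFO queue. It is only a sketch of one possible program (the variable names and output format are my own assumptions), not a general-purpose simulator:

```python
# Hand-coded replay of the single-queue, single-server example above.
interarrivals = [1.9, 1.3, 1.1, 1.0]   # sampled interarrival times from the first list
services = [1.7, 1.8, 1.5, 0.9]        # sampled service times from the second list

# Turn interarrival times into absolute arrival times: 1.9, 3.2, 4.3, 5.3
arrival_times, t = [], 0.0
for gap in interarrivals:
    t = round(t + gap, 1)
    arrival_times.append(t)

# With one server and a FIFO queue, each customer starts service as soon as both
# they have arrived and the server has finished with the previous customer.
server_free_at = 0.0
for arrival, service in zip(arrival_times, services):
    start = max(arrival, server_free_at)        # wait in the queue if the server is busy
    end = round(start + service, 1)
    print(f"T={arrival}: customer appears; service starts at T={start}, ends at T={end}")
    server_free_at = end
```

For this simple configuration the event-by-event reasoning collapses to the recurrence start = max(arrival, previous end); a more general simulator would maintain an explicit event list and queue, but for this example it would produce exactly the trace listed below.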
To summarise what we have done we can construct the list below:
Time T  What happened
1.9     Customer appears, starts service scheduled to end at T=3.6
3.2     Customer appears, joins queue
3.6     Service ends; customer at head of queue starts service, scheduled to end at T=5.4
4.3     Customer appears, joins queue
5.3     Customer appears, joins queue
5.4     Service ends; customer at head of queue starts service, scheduled to end at T=6.9
etc
You can hopefully see from the above how we are simulating (artificially reproducing) the operation of our queuing system. Simulation, as illustrated above, is more accurately called discrete-event simulation since we are looking at discrete events through time (customers appearing, service ending). Here we were only concerned with the discrete points T=1.9, 3.2, 3.6, 4.3, 5.3, 5.4, etc.
Once we have done a simulation such as shown above then we can easily calculate statistics about the system - for example the average time a customer spends queueing and being served (the average time in the system). Here two customers have gone through the entire system - the first appeared at time 1.9 and left the system at time 3.6 and so spent 1.7 minutes in the system. The second customer appeared at time 3.2 and left the system at time 5.4 and so spent 2.2 minutes in the system. Hence the average time in the system is (1.7+2.2)/2 = 1.95 minutes.
We can also calculate statistics on queue lengths - for example what is the average queue size (length). Here the queue is of size 0 from T=0 to T=3.2, of size 1 from T=3.2 to 3.6, of size 0 from T=3.6 to T=4.3, of size 1 from T=4.3 to T=5.3, of size 2 from T=5.3 to T=5.4. Hence the time-weighted average queue size is:
[0(3.2-0)+1(3.6-3.2)+0(4.3-3.6)+1(5.3-4.3)+2(5.4-5.3)]/5.4 = 0.296
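The same two calculations can be done in code using the events recorded above (a sketch only; the variable names are assumptions):

```python
# Average time in the system for the two customers who completed service by T=5.4:
times_in_system = [3.6 - 1.9, 5.4 - 3.2]                        # departure time minus arrival time
avg_time_in_system = sum(times_in_system) / len(times_in_system)   # 1.95 minutes

# Time-weighted average queue size over [0, 5.4]:
# (queue size, start of interval, end of interval) between successive events
queue_history = [(0, 0.0, 3.2), (1, 3.2, 3.6), (0, 3.6, 4.3), (1, 4.3, 5.3), (2, 5.3, 5.4)]
avg_queue_size = sum(size * (end - start) for size, start, end in queue_history) / 5.4

print(round(avg_time_in_system, 2), round(avg_queue_size, 3))    # 1.95 0.296
```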
Note here however how the above calculations (both for average time in the system and average queue size) took into account the system when we first started - when it was completely empty. This is probably biasing (rendering inaccurate) the statistics we are calculating and so it is common in simulation to allow some time to elapse (so the system "fills up") before starting to collect information for use in calculating summary statistics.
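One common way of doing this in code is simply to ignore customers who depart before some chosen warm-up time; the sketch below is illustrative only and the warm-up value is an arbitrary assumption:

```python
WARMUP = 100.0   # minutes allowed for the system to "fill up" (an assumed value)

def average_time_in_system(records, warmup=WARMUP):
    """Average time in the system from (arrival, departure) pairs, discarding the warm-up period."""
    kept = [depart - arrive for arrive, depart in records if depart >= warmup]
    return sum(kept) / len(kept) if kept else 0.0
```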
Discussion
In simulation statistical (and probability) theory plays a part both in relation to the input data and in relation to the results that the simulation produces. For example in a simulation of the flow of people through supermarket checkouts input data like the amount of shopping people have collected is represented by a statistical (probability) distribution and results relating to factors such as customer waiting times, queue lengths, etc are also represented by probability distributions. In our simple example above we also made use of a statistical distribution - the uniform distribution.
There are a number of problems relating to simulation models:
- typically the simulation model has to be run on the computer for a considerable time in order for the results to be statistically significant - hence simulations can be expensive (take a long time) in terms of computer time
- results from simulation models tend to be highly correlated meaning that estimates derived from such models can be misleading. Correlation is a statistical term meaning that two (or more) variables are related to each other in some way. Often variance reduction techniques can be used to improve the accuracy of estimates derived from simulation.
- in the event that we are modelling an existing system it can be difficult to validate the model (computer program) to ensure that it is a correct representation of the existing system
- if the simulation model is very complex then it is difficult to isolate and understand what is going on in the model and deduce cause and effect relationships.
Once we have the model we can use it to:
- understand the current system, typically to explain why it is behaving as it is. For example if we are experiencing long delays in production in a factory then why is that - what factors are contributing to these delays?
- explore extensions (changes) to the current system, typically to try and improve it. For example if we are trying to increase the output from a factory we could:
- add more machines; or
- speed up existing machines; or
- reduce machine idle time by better maintenance.
- design a new system from scratch, typically to try and design a system that satisfies certain (often statistical) requirements at minimum cost. For example in the design of an airport passenger terminal what resource levels (customs, seats, baggage facilities, etc) do we need and how should they be sited in relation to one another.
Special purpose computer languages have been developed to help in writing simulation programs (e.g. SIMSCRIPT), as have pictorial based languages (using entity-cycle diagrams) and interactive program generators. A comparatively recent development in simulation is packages which run with animation - essentially on screen one sees a representation of the system being simulated and objects moving around in the course of the simulation.
Simulation began to be applied to management situations in the late 1950s to look at problems relating to queuing and stock control. Monte-Carlo simulation was used to model the activities of facilities such as warehouses and oil depots. Queuing problems (e.g. supermarket checkouts) were also modelled using Monte-Carlo simulation. The phrase "Monte Carlo" derives from the well-known gambling city of Monte Carlo on the Mediterranean in Monaco. Just as in roulette we get random numbers produced by a roulette wheel when it is spun, so in Monte Carlo simulation we make use of random numbers generated by a computer.
The advantages of using simulation, as opposed to queuing theory, are:
- it can more easily deal with time-dependent behaviour
- the mathematics of queuing theory is hard and only valid for certain statistical distributions - whereas the mathematics of simulation is easy and can cope with any statistical distribution
- in some situations it is virtually impossible to build the equations that queuing theory demands (e.g. for features like queue switching, queue dependent work rates)
- simulation is much easier for managers to grasp and understand than queuing theory.
One disadvantage of simulation is that it is difficult to find optimal solutions, unlike linear programming for example where we have an algorithm which will automatically find an optimal solution. The only way to attempt to optimise using simulation is to:
- make a change; and
- run the simulation computer program to see if an improvement has been achieved or not; and
- repeat.
Large amounts of computer time can be consumed by this process.
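As a hedged sketch of this trial-and-error process (run_simulation and the candidate configurations below are hypothetical, not part of the original notes):

```python
def optimise_by_simulation(candidates, run_simulation, replications=20):
    """Try each candidate change, run the simulation several times, keep the best average result."""
    best_config, best_value = None, float("inf")
    for config in candidates:
        # average several independent runs to damp down random variation
        value = sum(run_simulation(config) for _ in range(replications)) / replications
        if value < best_value:                        # "better" here means a smaller value,
            best_config, best_value = config, value   # e.g. average time in the system
    return best_config, best_value

# e.g. optimise_by_simulation(candidates=[1, 2, 3], run_simulation=..., replications=20)
# where each candidate might be a number of machines (servers) to provide
```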