In the Industrialisation stage, the design activity concerns the whole problem, while in the Exploration of the LSCO Opportunity stage, and possibly in the Full Requirement Study stage, only part of the problem may be under study in order to check its feasibility.
Description
The objective of this activity is to design, update or revise a resolution method for the problem under study. It is based on the analysis of the structural features of the problem description contained in the PDD and leads to the production of a document, the Problem Solution Deliverable that will guide the implementation of a software solution. In particular, during this activity, possibilities of hybrid decomposition of the problem are investigated.
The design activity is decomposed into 3 tasks:
The Algorithm characterisation identifies the algorithm to be implemented by studying the problem features described in the PDD.
The Problem modelling formulates the problem in a standardised form by describing the algebraic model of the problem and the data model, in order to prepare the implementation of the solution.
The Refinement task revisits the design because some implementation work has identified needs for modifications.
Figure 4.1 Design Activity
Participants
LSCO Technology Provider Side:
LSCO Project Manager
LSCO Expert (responsible for the execution of the Exploration of the LSCO Opportunity stage)
LSCO team (responsible for the execution of the Industrialisation stage. The involvement of the LSCO team is essential during the validation of the problem definition)
Input
Problem Definition Deliverable (+ initial Problem Solution Deliverable)
Output
(updated) Problem Solution Deliverable
Problem Solution Deliverable (PSD)
Design is a central activity of any LSCO project. Its goal is to prepare the implementation of a software solution to the problem that will take place during the implementation activity. It is thus essential that the design activity leads to the production of a document that will serve as a reference along with the PDD for the developers and later for the people in charge of the maintenance of the LSCO application.
Moreover, depending on the complexity of the problem, the design activity may be the result of a cumbersome iterative process. The PSD will then serve as a communication tool inside the LSCO team to discuss technical choices for the solution. It is also critical for the success of the LSCO project to keep a trace of all the changes that have been brought to the design and the reasons that have led to these evolutions. The PSD is a knowledge capitalisation tool that will help save time during the project as well as during future studies.
The PSD comprises a description of the future software architecture, and for each sub-problem (if an organisational decomposition is stated in the Problem Definition Activity) it defines the following three aspects.
1. Problem features and structure
This concerns the decomposition of a problem for technical reasons and the relations between sub-problems.
Additional information, like simplifications made with reference to the problem definition deliverable (PDD), should also be listed here.
2. Algorithm(s) characteristics
For each sub-problem identified in the technical decomposition, this part explains the choice of the type of algorithm based on the characterisation of the sub-problem and, if needed, its design (refer to the Algorithm Characterisation Task section). This can be very brief when using specific optimisation tools (e.g. Linear Programming solvers). However, it might require a detailed description if the proposed algorithm is based on heuristics or involves several solvers. Each evolution of the algorithm design should be carefully listed here, with the reason for the changes.
3. Problem model(s)
For each sub-problem identified in the technical decomposition, this part gives the algebraic and data models corresponding to one of the algorithms listed in the previous section (refer to the Problem Modelling Task section). Each evolution of the modelling should be carefully listed here, with the reason for the changes.
Depending on the stage of the lifecycle in which the design activity takes place, the work and the resulting PSD may be more or less detailed.
In summary, when preparing the PSD, the designer should keep in mind its different functions:
a communication tool to publicise technical choices among the LSCO team,
a knowledge repository to collect the changes and the reasons of the evolution in the design of the solution,
a reference for the programming activity along with the PDD,
a reference for the maintenance of the LSCO application.
Algorithm Characterisation Task
Description
The algorithm characterisation task identifies the problem features which dictate the characteristics of the algorithm to be built. It may reveal a need for problem decomposition to ensure tractability.
Issues
The main issue is not to focus on one problem feature but to consider the problem from different perspectives (e.g. special features of the application, or, by contrast, the nature of the whole application domain). It is important to identify what is specific to the problem, but also to be aware of what is general and inherited from previously solved problems from the same application domain.
Milestones
This task should lead to a pre-selection of techniques for solving the problem. At the very least, this task will help remove techniques that are irrelevant to the problem at hand.
Guidelines
Four orthogonal types of guidance are provided:
A checklist of points and questions that highlight the relevant technical knowledge to be extracted from the Problem Definition Deliverable.
A classification schema driven by the properties of the constraints and variables.
Different approaches to address the problem decomposition.
Some general flavours of dos and don'ts mapped to well-defined categories of LSCO problems.
Algorithms and software packages from constraint programming, mathematical programming or more generally Operations Research are successful for some classes of applications. However, when dealing with large-scale problems, there are usually fewer algorithmic options, since many computational procedures that are standard for small problems become unreasonably expensive in terms of efficiency and scalability. Moreover, the choices for large problems are more complex because of the critical effect of special problem structure on the algorithm efficiency.
Obviously, when tackling a new problem, it is preferable to use solutions that are already proven rather than to create (or re-create) new ones. Thus, to identify which algorithm or combination of solvers is most appropriate for an LSCO application, it is necessary to extract the special features as well as the global characteristics of the problem. Not all problem characteristics are relevant to the search for an algorithm. A reasonable classification scheme, based on different approaches to studying the problem features, is proposed below.
One important aspect of LSCO design is that of problem decomposition, which can be required for tractability and efficiency reasons. When decomposition is needed, the issue is how to decompose. Means to structure a problem into sub-problems are presented, together with efficiently solved problem classes which could be identified as sub-problems in an LSCO. This crucial point should be kept in the designer's mind.
Guideline: Initial problem analysis
Extracting problem features from the PDD
The role of this guideline is to point out the technical aspects that are embedded in the problem and that then have to be extracted from the PDD as a first step towards the algorithm characterisation. Not every aspect will set requirements on the algorithm characterisation. The following checklist of questions identifies the relevant aspects that contribute to the selection, or rejection, of a technique for the algorithm.
An algorithm cannot be defined if the problem is not understood. The main objective here is to understand the meaning of the requirements from a technical perspective. This is a prerequisite for the analysis of the problem features and components.
The questions are structured into four main groups as follows:
1. Problem structure (see classification of algebraic properties below)
Can you classify the application domain of the problem: scheduling, resource allocation, transportation, time-tabling, etc.?
Constraint classes. What are the semantics and mathematical properties of your constraints?
2. Quantitative aspects
Problem size. How big is the data set?
A priori search space dimension (when relevant) and problem complexity. How many decision variables do you foresee, and what is their domain size?
CPU time. Are there any hard constraints on the execution time of your algorithm?
3. Qualitative aspects
Solution type. Are you looking for an optimal solution or not?
Objective function. How many criteria are you considering? Is the objective function well-defined?
What is the level of user interaction in the decision making process?
Are there guarantees needed for the optimisation quality?
Does robustness appear as an issue?
4. Dynamic aspects
In some cases, it is also important to consider the fact that the user will interact with the application, or that it may be beneficial if the user could provide guidance to the application. Then, several additional issues have to be considered:
The algorithm must react in a consistent way to user input. For example, when the user slightly modifies the problem, the application should preferably provide a solution that is close to the previous one. In particular, the application should guarantee that it repeatedly produces the same solution to the same problem.
The algorithm must react quickly to user input. In mathematical programming, in the continuous case, duality, post-optimality and (warm) restart techniques from the previous basis allow very efficient interactivity. Constraint programming also allows quick modification since any new information will be propagated immediately to modify the current solution.
The algorithm must be organised so that the user can provide effective guidance.
The PDD provides answers to the above questions. The following guidelines refine these answers: they address the solution design activity, proposing ways in which the answers to these questions influence the algorithm design from different technical perspectives: algebraic properties, problem decomposition, and algorithm flavours.
Classification of algebraic properties
Based on the analysis of the information contained in the PDD, the problem can be linked to broad classes of optimisation problems using its basic algebraic properties. Table 4.1 presents a list of problem characteristics and defines their corresponding mathematical classifications. There is at this point no concept of special problem formulation; the natural mathematical setting of the constraints is considered.
Characteristic          Property                                    Classification
Type of variable        Real numbers                                Continuous
                        Both continuous real numbers and integers   Mixed Integer
                        Integers                                    Integer or discrete
                        Symbolic                                    Non-numerical
Type of constraints     Linear constraints and objective function   Linear
and objective           Quadratic constraints or objective          Quadratic
                        Other non-linear constraints/objective      (Other) Non-linear
                        Logical and non-arithmetic constraints      Symbolic
Problem                 Not subject to constraints                  Unconstrained
                        Subject to constraints                      Constrained
Data                    Known with certainty                        Deterministic
                        Forecast or probabilistic data              Stochastic
Table 4.1 Combinatorial optimisation problem properties
This classification will help the algorithm designer to see which types of methods could be used, or should be disregarded, in a broad sense, as explained in the section "Find the flavours of the algorithm" below.
For example, considering the classification according to the properties of the variables (continuous, discrete, mixed) and constraints, a broad characterisation of methods is the following.
The distinction between continuous and discrete decision variables has a very significant impact on the choice of algorithm. In general, the best approaches for solving problems with continuous variables are mathematical. If the constraints are linear, then linear programming suffices. If the constraints are more complex (non-linear or quadratic), then quadratic or semi-definite programming and further mathematical techniques may be required. In this field, convexity is a key issue. When the variables are all integer and the constraints linear, one should check whether the integer nature of the problem is crucial, or whether a linear relaxation with real numbers is a plausible option.
When integrality is a hard constraint (e.g. choices, people), methods from constraint programming combined with specific heuristics can be very powerful. Note also that mixed integer programming (MIP) techniques, which approximate discrete variables by continuous ones, can also be used. In this case, the cost of the continuous solution can serve as a guide in terms of bounding the solution value. Sometimes, but seldom, the relaxed solution values can help in the search for integral values. However, in most cases, the integral optimum, if it exists, is quite different from the relaxed one and cannot simply be deduced by rounding the real values.
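The gap between the relaxed and the integral optimum can be seen on a toy 0/1 knapsack instance. The following sketch (pure Python, invented data) computes the fractional (LP-relaxed) optimum greedily, the true integer optimum by brute force, and the value obtained by naively rounding the fractional solution down:

```python
# A minimal sketch (hypothetical data) showing that rounding the LP
# relaxation of a 0/1 knapsack does not yield the integer optimum.
from itertools import combinations

values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

def fractional_optimum(values, weights, capacity):
    """LP relaxation of the 0/1 knapsack: greedy by value density is optimal."""
    items = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    total, room = 0.0, capacity
    x = [0.0] * len(values)
    for i in items:
        take = min(1.0, room / weights[i])   # take as much of the item as fits
        x[i] = take
        total += take * values[i]
        room -= take * weights[i]
        if room <= 0:
            break
    return total, x

def integer_optimum(values, weights, capacity):
    """Integer optimum by brute force (fine for tiny instances)."""
    best = 0
    n = len(values)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= capacity:
                best = max(best, sum(values[i] for i in subset))
    return best

lp_value, x = fractional_optimum(values, weights, capacity)
ip_value = integer_optimum(values, weights, capacity)
rounded = [int(v) for v in x]                # naive rounding down
rounded_value = sum(values[i] for i in range(len(values)) if rounded[i])
print(lp_value, ip_value, rounded_value)
```

Here the relaxation gives a bound of 240, the integer optimum is 220, and rounding the fractional solution yields only 160: the bound is useful, the rounded solution is not.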
Two aspects related to the properties of the search space are also to be considered:
Size of the search space: For each technique, the objective is to minimise the size of the search space to be explored. In constraint programming, this means finding good constraint propagation methods to reduce the domains of the variables and detect inconsistencies as early as possible. In mathematical programming, good cuts and the computation of good lower bounds are required for similar purposes. In local search, it is important to limit the exploration to dominating solutions. For example, one of the key characteristics of the tabu search method of [Nowicki & Smutnicki 96] is a highly restricted neighbourhood for each solution.
Structure of the search space: The structure of the search space is also important. Of course, an open question at this point is how this structure can be characterised. In constraint programming, the most frequent characterisation consists of describing the problem as a graph. The variables of the problem are the nodes of the graph; the constraints are the edges (or the hyper-edges) of the (hyper-)graph. Different ways of exploring the search space (e.g., different variable instantiation orders or different backtracking strategies) may be more or less appropriate depending on the structure of the graph. In local search, the structure of the search space can be characterised by the neighbourhood relation and the landscape of the optimisation function with respect to this relation [Mattfeld 96]. If many neighbour solutions have the same value, local search may proceed indefinitely from one of these solutions to another, thereby ignoring the remainder of the search space. At the other extreme, if good solutions are very far from one another, it may be very difficult to reach one good solution from another. Local search will typically work best when the chosen neighbourhood relation is such that every sub-optimal solution is not far from a better one.
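As an illustration of how the neighbourhood relation drives local search, here is a minimal hill-climbing sketch on an invented toy instance of number partitioning (split numbers into two sets minimising the difference of their sums), where the neighbourhood of a solution is "flip one item to the other set":

```python
# A minimal local-search sketch (hypothetical toy instance).  The search
# moves only to strictly improving neighbours, so its behaviour depends
# entirely on the chosen neighbourhood relation.
nums = [8, 7, 6, 5, 4]

def cost(assign):
    """Difference between the sums of the two sets (0 is a perfect split)."""
    a = sum(n for n, s in zip(nums, assign) if s)
    return abs(a - (sum(nums) - a))

def local_search(assign):
    improved = True
    while improved:
        improved = False
        for i in range(len(assign)):            # enumerate the flip neighbourhood
            neighbour = assign[:]
            neighbour[i] = 1 - neighbour[i]
            if cost(neighbour) < cost(assign):  # accept the first improving move
                assign, improved = neighbour, True
                break
    return assign

start = [1, 1, 1, 1, 1]          # everything in one set: the worst start
solution = local_search(start)
print(solution, cost(solution))
```

On this landscape the flip neighbourhood always contains an improving move until a perfect split is reached; on a harder instance the same loop would stop in a local optimum, which is exactly the structural property discussed above.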
Symmetries constitute an important aspect of the problem structure. For example, when two activities of a given scheduling problem have the same characteristics and are subjected to the same constraints, it is not important which of the two activities precedes the other in a solution. Two equivalent solutions are obtained by exchanging these two activities. Fixing the order is then not a restriction in terms of optimisation quality. In general, symmetries can be exploited to reduce the exploration of the search space and guide the search [Puget93] [Smith et al. 96].
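The effect of symmetry breaking can be illustrated with a toy enumeration (hypothetical instance): three identical activities must receive distinct start times, and fixing their order keeps exactly one representative of each class of equivalent solutions:

```python
# A small symmetry-breaking sketch (invented instance): three identical
# activities get distinct start times from {0, 1, 2}.  Every permutation is a
# symmetric variant of the same schedule; an ordering constraint keeps one.
from itertools import permutations

starts = [p for p in permutations(range(3))]               # all solutions: 3! = 6
canonical = [p for p in starts if list(p) == sorted(p)]    # order fixed: t1 <= t2 <= t3
print(len(starts), len(canonical))
```

The search space shrinks from 6 solutions to 1 without losing any optimisation quality, which is the point made above.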
Guideline: Find the flavours of the algorithm based on problem categories
The characterisation of the algorithm to develop is a complex task, especially given the wide range of algorithms that can be integrated and combined. The above approach is based on the problem features and structure. Another, complementary approach presented here is to map an LSCO, or sub-parts of it, to dos and don'ts guidelines. These guidelines map different problem categories to solvers, and more generally to hybrid algorithms, that have been shown to be efficient or useless. The negative advice on which algorithms not to use can save a lot of time by pruning the set of potential methods. Table 4.2 presents a collection of dos and don'ts for a large set of problem categories. It serves two purposes:
1. provides a wide view of problem categories and associated methods,
2. suggests which categories of sub-problems to look for, such that they can be tackled efficiently.
Problem category: Set covering, set partitioning (possibly including some additional knapsack constraints); set selection (coverage)
Dos: Mixed Integer Programming. Column generation, which for very combinatorial problems allows an efficient technical decomposition. CP with partial search (LDS) and local optimisation.
Don'ts: Constraint propagation with depth-first search.

Problem category: Scheduling (disjunctive scheduling with single tasks; scheduling with complex resources and tasks)
Dos: B&B + constraint propagation, using shaving and tabu heuristics from the literature. Read Demeulemeester's thesis [Demeulemeester 92]. B&B + resource histograms + partial search.
Don'ts: MIP, genetic algorithms, simulated annealing.

Problem category: Production scheduling (big size (multi-product, multi-machine), linear processes; short-range problems, including fine set-up times and non-linear routes (job-shop problems), small size)
Dos: MIP can bring satisfactory solutions, sometimes combined with heuristics for post-processing. CP + heuristics. If big size, decompose the problem. Propagation on disjunctive constraints.
Don'ts: MIP.

Problem category: Time-tabling (fixed set of activities; variable schedules)
Dos: Global constraints, flow algorithms. LP model, local optimisation. LP, tabu, coverage techniques. Column generation. CP for schedules with global analysis.
Don'ts: MIP. Propagation on local constraints.

Problem category: Staff scheduling (shift construction; roster grid construction)
Dos: Shift construction: MIP with a set covering/partitioning approach. Roster grid construction: CP and heuristics. Decompose and use hybrid approaches. Column generation.
Don'ts: Don't try to solve the whole problem in one round with one approach.

Problem category: Matching (assignment, resource allocation)
Dos: Use an ad hoc algorithm (matching, flows) or LP (simpler); search for a good model relaxation => lower bounds and local optimisation.
Don'ts: Constraint propagation.

Problem category: Assignment problems (big and linear; small and non-linear)
Dos: Big and linear: MIP. Small and non-linear: CP or hybrid.

Problem category: Routing problems (Travelling Salesman Problem)
Dos: CP for small complex problems only. Local optimisation by default. Branch & Cut is an option.
Don'ts: Don't build ad hoc solutions - there are many published algorithms which work well.

Problem category: Vehicle routing problems
Dos: Local optimisation.
Don'ts: B&B.

Problem category: Tour problems
Dos: Column generation MIP approaches have proved to solve such problems optimally. Heuristics, although sub-optimal, have proved to solve them very efficiently.
Don'ts: If there are maximum duration constraints, don't use a network flow approach with LP.

Problem category: Flow problems (network optimisation; optimisation problems with Boolean choices (multi-periodic, multi-product, with inventories and intermediate processes))
Dos: LP models + flows + global analysis. Good heuristics & local optimisation. MIP.
Don'ts: Pure CP, genetic algorithms.

Problem category: Melting problems
Dos: Pure LP (MIP).

Problem category: Others (cutting problems; bin packing)
Dos: 1 dimension: MIP can be OK. Global constraint propagation. Several dimensions (2 or 3): simulated annealing has proved to work in some cases.
Don'ts: MIP for problems with more than 2 dimensions.

Table 4.2 Find the flavours
(- Abbreviations used: CP: Constraint Programming, LP: Linear Programming, B&B: Branch and Bound, LDS: Limited Discrepancy Search, TSP: Travelling Salesman Problem.
- See Annex D for definitions of the problem categories.
- Note that this table is neither exhaustive nor normative, as some forms of problems that belong to the same category may not always be characterised by the same dos and don'ts.)
Guideline: Problem decomposition
Independently of the problem features and structure, an LSCO problem might need to be decomposed for technical reasons. Such a process is called technical decomposition.
Although current algorithms and methods for the various problem categories and classifications are continually progressing, it can turn out that the problem is intractable as a whole, either due to its inner structure or due to some system requirement such as CPU time. The decomposition may also aim at simplifying the maintenance of the application by slicing the problem into several nearly independent sub-problems.
The designer should bear in mind that technical decomposition often leads to sub-optimal results and should thus only be used after a careful evaluation of alternatives. In order to reduce the effects of the decomposition on the quality of the results, hybrid algorithms that exploit different problem features should be investigated. Another, complementary solution is to anticipate the subsequent sub-problems when solving a given sub-problem. This will be detailed in the Problem Modelling task.
When to decompose a problem?
In order to decide whether or not to decompose a problem for tractability reasons, one must consider the limitations of existing algorithms, software and hardware technology. Consequently, one must bear in mind that this question is raised at a specific time for a specific application.
Clearly, identifying the need for technical decomposition, given the structure and complexity of the search space, requires a good knowledge of the state of the art and of the current strengths and limitations of solving techniques (coming from different frameworks such as Constraint Programming, Mathematical Programming and stochastic methods). A brief introduction to these techniques is given in Annex B.
How to decompose an LSCO for technical reasons?
Considering the structure of sub-parts of the problem may lead directly to a good decomposition. For instance, if a sub-problem can be identified as a simple, efficiently solvable linear problem, then handling this part of the problem as one block can be an interesting decomposition. However, when the structure of the problem does not point to this kind of decomposition, the following two perspectives can be used.
Problem decomposition can be derived from various views, such as the identification of well-defined sub-problems or the semantics of some constraint sets. The first view is based on classes of well-defined problems and follows a mathematical programming reasoning. The second is based on the constraint semantics and follows a constraint programming philosophy.
Focusing on problem categories
The first approach to problem decomposition is to search for well-defined sub-problems of the LSCO application (well-defined in the sense of existing mappings between models and efficient solvers). It might well intersect with the semantic decomposition but can also be complementary. The strength of this decomposition is its powerful use of problem features by efficient solvers. If a sub-problem that corresponds to a pure problem has been extracted, one can be confident that, depending on the size of the problem, the literature provides solvers that make the best use of the problem features. The main weakness of this approach is the risk that the developer does not tackle the actual LSCO application. Indeed, it might push the developer to remove some side constraints that make the sub-problem impure with respect to the existing well-defined problems.
Table 4.3 presents industrial application domains together with instances of LSCO problems. The Category column gives some well-defined problems that appear as components of the problem instances, for which dos and don'ts have been listed in Table 4.2. As additional information, the Classification column links the different categories with Table 4.1, based on the problem-specific features. Definitions of the applications are given in Annex D.
Field                        Application                  Category                      Classification
Airline industry             Crew scheduling              Set covering                  var: Integer
                             Fleet scheduling             Set partitioning              constr: Linear
                             Shift planning               Network flow                  data: Deterministic
                                                          Assignment
Production manufacturing     Cutting stock                Knapsack                      var: Mixed integer
                                                          Bin packing                   constr: Non-linear
                                                                                        data: Deterministic
Production scheduling        Car sequencing               Resource constrained          var: Integer
                             Resource allocation          project scheduling            constr: Non-linear
                             Task scheduling                                            data: Deterministic
Transportation industry      Facility location            Set covering                  var: Integer
                             Vehicle routing              TSP                           constr: Linear
                             Tour problems                                              data: Deterministic
Telecommunication and        Network flow                 Matching                      var: Integer
network industry             Network optimisation                                       constr: Linear
                             Frequency allocation                                       data: Stochastic
Wireless telecommunication   Frequency allocation         Coloration,                   var: Integer
                                                          maximum independent set       constr: Quadratic
                                                                                        data: Deterministic
Personnel scheduling         Time-tabling                 Assignment                    var: Integer
                             Work force scheduling        Matching flow                 constr: Linear
                                                                                        data: Deterministic
Finance                      Portfolio optimisation       Knapsack                      var: Continuous
                                                                                        constr: Quadratic
                                                                                        data: Stochastic
Table 4.3 Examples of sub-problem categories within LSCO applications (var = type of variables, constr = type of constraints, data = type of data)
A good overview of the problem categories that are treated by specialised operations research algorithms can be obtained from the literature (and additionally from the Operations Research Library of Benchmark problems at Imperial College).
Problem decomposition based on problem categories: Staff scheduling problem
The staff scheduling problem is about planning and scheduling staff over a certain period of time (one year, six months, etc.). Designing a single global scheduling model over the horizon is usually feasible, but optimising, and thus fully solving, this problem is often impossible with existing algorithms and technology. A decomposition is required. Based on the analogies between the problem definition and some problem classes, the following technical decomposition may be applied:
Technical decomposition:
A shift scheduling problem: find the best shifts to cover the work for each day.
A roster problem: assign days-off and shifts in a roster grid.
For the first problem, a MIP algebraic model is adequate.
For the second problem, it seems suitable to decompose it into three sub-problems:
Assigning shifts to different types of smaller rosters using a MIP model,
Constructing the rosters by assigning days-off and shifts inside the duty rosters using a CLP model,
Extending some shift lengths in order to satisfy the weekly working-hours constraints. This last sub-problem is also solved using a CLP model but could be solved using a MIP model.
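The first sub-problem above, shift construction, can be viewed as a set covering problem. A brute-force sketch (invented shifts, hours and costs, not the actual model) illustrates the formulation: pick the cheapest set of candidate shifts so that every hour of the day's workload is covered.

```python
# A toy set-covering sketch (hypothetical data): choose shifts covering
# 09:00-17:00 at minimum total cost.  A MIP solver would do this at scale;
# here a brute-force enumeration shows the structure of the model.
from itertools import combinations

hours = set(range(9, 17))                 # work to cover: 09:00-17:00
shifts = {                                # candidate shifts: name -> (hours, cost)
    "early":  (set(range(9, 13)), 4),
    "late":   (set(range(13, 17)), 4),
    "long":   (set(range(9, 17)), 7),
    "middle": (set(range(11, 15)), 4),
}

best = None
names = list(shifts)
for r in range(1, len(names) + 1):
    for combo in combinations(names, r):
        covered = set().union(*(shifts[n][0] for n in combo))
        if hours <= covered:                         # feasibility: full coverage
            c = sum(shifts[n][1] for n in combo)
            if best is None or c < best[1]:
                best = (combo, c)
print(best)
```

On this instance one "long" shift (cost 7) beats the "early" + "late" pair (cost 8); the MIP model mentioned above encodes exactly this choice with 0/1 variables per candidate shift.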
Focusing on constraint semantics
The second type of problem decomposition concentrates on the semantics of the constraints at hand. Given the whole spectrum of LSCO applications, one can extract three semantics of constraints: time-dependent constraints, space-dependent (geometrical) constraints, and location-dependent constraints. The strength of this approach is that it might suit a natural decomposition of the problem as suggested by the client (e.g. resource allocation versus scheduling). It also considers the constraints independently of any problem category (well-defined model/solver relationship), and thus does not set apart the so-called side constraints.
Time dependency
Applications like job-shop scheduling, vehicle routing and crew scheduling involve travel times and task durations. The tasks to be completed are time dependent. In these cases, any resource needed for a task is tied up over that period of time. For example, a flight crew cannot be used on two flights which overlap in time.
Time dependency gives rise to a very common class of symbolic constraints, broadly called non-overlap constraints. A non-overlap constraint prevents two tasks from being carried out during time periods which overlap. Either task A finishes before task B starts, or task B finishes before task A starts. This non-overlap constraint is difficult to handle using MIP and requires a linearisation, which introduces a large number of Boolean variables. Successful ways of handling such constraints are twofold:
Local Search. Make a choice (A before B or vice versa) for each such constraint, find a feasible solution which satisfies these choices, and then try to improve on the solution by reversing some of the choices.
Global Propagation. Consider sets of tasks that cannot overlap pairwise. Check the sum of their durations, and reason on the possible orderings of the tasks within this set, or of tasks which also cannot overlap with this set. Without making choices, deduce earliest and latest start and end times for the tasks involved.
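The global propagation idea can be illustrated with a small sketch (invented data): for tasks that pairwise cannot overlap, every subset must be executed sequentially, which yields a lower bound on the completion time without making any choices.

```python
# A minimal non-overlap reasoning sketch (hypothetical data): three tasks
# share one machine.  For any subset, the last task cannot finish before the
# subset's earliest release plus the sum of all its durations - a deduction
# made without search, in the spirit of global propagation.
from itertools import combinations

tasks = {"A": (0, 4), "B": (1, 3), "C": (0, 2)}   # name -> (release, duration)

def earliest_makespan_bound(tasks):
    bound = 0
    for r in range(1, len(tasks) + 1):
        for subset in combinations(tasks.values(), r):
            release = min(t[0] for t in subset)
            total = sum(t[1] for t in subset)
            bound = max(bound, release + total)   # the subset must run sequentially
    return bound

print(earliest_makespan_bound(tasks))
```

Here the full set {A, B, C} gives the binding deduction 0 + (4 + 3 + 2) = 9: no schedule can finish before time 9, a fact a depth-first search would otherwise discover only by enumeration.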
Space Dependency
Just as some kinds of tasks take up certain time periods, other kinds of tasks occupy certain spatial regions. Examples of problem categories whose tasks occupy space are multi-dimensional packing problems and cutting stock problems. Again the underlying constraint is the non-overlap constraint. However, for space-dependency constraints there are typically two dimensions involved, and for packing problems this can increase to three. Techniques suited for two-dimensional non-overlap constraints are global constraint propagation (see above) and stochastic techniques. For two and three dimensions, simulated annealing has also proved useful.
Location dependency
Applications like workforce scheduling and vehicle routing allow resources to be moved, but incur a cost in doing so. Consequently, minimising the distances over which the resources have to be moved is an important criterion to be taken into account in the decision-making process. Such problems involving minimising distances have been intensively studied in the operations research community over the last decades (the Travelling Salesman Problem or the Steiner Problem are classic examples). The TSP has been solved by a variety of techniques including simulated annealing and advanced MIP techniques. However, to solve large-scale problems it may be necessary to solve a huge number of small routing problems which have to be coordinated by a master program.
Crew scheduling, fleet assignment and other applications where the manpower (the resource) changes location during a task are traditionally handled by techniques different from those for routing problems. Often resources cannot be moved except when carrying out a task; the assignment of a specific resource to a task then often dictates what subsequent tasks the resource is likely to be assigned to. This drives the decision-making process and the selection of heuristics. For example, a good decision strategy is to assign a resource to a whole sequence of tasks rather than to a single task.
Constraint semantic-based decomposition: the Extended Car Sequencing Problem
The extended car sequencing problem adds to the standard car sequencing problem (see Annex D) the fact that several assembly lines are considered within a factory. A production schedule for each assembly line should be produced. The Problem Definition Deliverable does not require any organisational decomposition. However, for tractability reasons this problem cannot be solved as a whole. A technical decomposition is necessary. The problem consists of:
Data:
A set of cars to be produced in an assembly plant for the next weeks
Plant characteristics (capacities of the assembly lines).
Hard/Soft Constraints:
The total volume of production per day for each assembly line is constant.
Restrictions on the car options allowed on the assembly lines when scheduling the cars (hard constraint).
Examples of constraints on options are: no more than two cars with ABS out of three cars, one car with opening roof out of five, etc. (soft constraints, which lead to a criterion such as minimising the number of violations).
Based on the analysis of the constraint semantics, the following technical decomposition can be applied successfully.
Technical decomposition:
Assigning car orders to days and assembly lines (location dependency)
Sequencing the cars for each day on the assembly lines (temporal dependency)
The first sub-problem can be easily modelled as a Large Scale MIP model. In addition, some constraints must be added to the model to allow for some anticipation in setting requirements on the options which will have to be satisfied in the second problem.
The second sub-problem can be modelled and solved in several different ways, for instance: using finite domain CLP; using a meta heuristic that involves a greedy heuristic followed by simulated annealing; using an MIP model solved by a hybrid algorithm based on a descent algorithm combined with a local optimisation.
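The soft option constraints driving the second sub-problem can be evaluated with a simple counter, which a greedy heuristic or simulated annealing would use as its objective. The sketch below is illustrative Python, not part of the original document; the num/den window semantics follows the "no more than num cars with the option out of den consecutive cars" reading.

```python
def option_violations(sequence, options, num, den):
    """Count violations of a num/den option constraint.

    sequence: list of car ids in production order
    options:  set of car ids requiring the option (e.g. ABS)
    A window of den consecutive cars may contain at most num cars
    requiring the option; each window over the limit counts as
    one violation (a common soft-constraint formulation).
    """
    violations = 0
    for start in range(len(sequence) - den + 1):
        window = sequence[start:start + den]
        if sum(1 for car in window if car in options) > num:
            violations += 1
    return violations
```

For "no more than two cars with ABS out of three", `option_violations(seq, abs_cars, 2, 3)` returns the number of offending windows, which the metaheuristic tries to drive to zero.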
Problem Modelling task
Description
The problem modelling task aims at describing the data and algebraic model of the problem to be solved. An algebraic model is a ready-to-code model of the problem that takes into account the algorithm characterisation, while remaining ideally independent of a programming platform.
Issues
The main issues concern the data representation, choice of variables and constraint formulation, as well as the efficient mapping between the model and the algorithm characterisation (e.g. constraint formulation, problem decomposition).
Milestones
The algebraic model should describe the problem under study in terms of mathematical expressions and logical formulas. The data model represents the data structures to be used in the programming activity.
Guidelines
A good problem model should represent the LSCO that will be programmed and solved. It should also reflect the problem definition as closely as possible.
Definition of an algebraic model
The algebraic model is derived from the conceptual model contained in the PDD. While the conceptual model was in natural language and remained independent of any solving method, the algebraic model is mathematical and depends on the algorithm characterisation.
The algebraic model should be general enough in its structure and syntax form. This will help for the implementation of hybrid solutions. This is not currently achieved as some models are very strongly related to the resolution techniques. For instance, if one considers the mathematical programming framework, the general form of a mixed integer programming model is:
min cx
subject to Ax ≤ b
x ≥ 0, with xj integer for all j in a given index set I
Now, considering the constraint programming framework, the model is a Constraint Satisfaction Problem (CSP) described as a triple:
(V, D, C) where
V is a set of variables
D is a set of domains of possible values for each variable (integers, reals, sets)
C is a set of arbitrary constraints (relations)
These models differ in many ways. On the one hand, the MIP model deals with a set of constraints in linear form whose variables are not bound to a restricted domain (a priori). It also requires an objective function. On the other hand, the CSP model considers a system of arbitrary constraints whose variables are bound by a domain.
The idea behind hybrid algorithms is to open the range of possible solvers and consequently not to be restricted to one single form of descriptive model. The form of the algebraic model should therefore give a general formulation of the LSCO problem in terms of sets and indices, input data, variables, constraints and decision criteria. These elements describe the LSCO problem and allow all possible representations of constraints (mathematical or relational) and objectives.
Sets and indices
Sets represent sets of objects that belong to a common conceptual entity. Such sets are used in expressing the constraints. Indices introduce the dimension of the problem for every measurable iteration.
Example: Let C = {c1, ..., cn} be a set of cars.
Let j: {1,..,7} be the day of the week index ranging from 1 to 7.
Input data
Input data derive directly from the conceptual model. They represent any known information and attributes of an entity.
Example: colour(ci) : colour of the car ci.
Variables
Variables can be of two types: inherent to the problem definition or introduced for technical reasons (e.g. goal variables, Boolean variables). They do not necessarily match with the decision data presented in the conceptual model. The choice for the variables is very closely related to the methods and algorithms the programmer intends to use when coding the problem. For clarity, variables should be represented in different fonts from the input data. The bold font is used.
Example: produced(ci , j): the car ci is produced on day j (Boolean variable)
Constraints
Constraints can be of two classes and of many different types. The two classes distinguish between soft and hard constraints. However, one should bear in mind that some hard constraints might become soft at some stage of the resolution, and vice versa.
Constraint types can be structured by their semantics and mathematical properties. This is important for the mapping of the model to the algorithm, as well as during the implementation activity.
Example: The constraint Σi produced(ci , j) ≤ 6, for each day j, means that for each day of the week the number of cars produced should not exceed 6.
Objective (or cost) function
The objective function is a minimisation or maximisation of some decision criteria. It represents the main goal(s) of the problem or sub-problem. The term objective refers in general to a function that has to be maximised, while cost refers to a minimisation goal.
For example the maximisation of cars produced in the whole week is written:
max Σi,j produced(ci , j)
Guideline: How to build the algebraic model
In general, a good model is a formulation of the problem that is simple, concise, close to the problem semantics and well structured in terms of distinguishing the different constraint classes (hard/soft) and types (semantics, mathematical properties: convex, non convex, linear, non linear, over integers/real/sets).
Several issues are related to the building of a good algebraic model: selection of variables, selection of constraints, problem decomposition.
Variable selection
The choice of the variables depends greatly on the various types and sizes of the index sets relative to the unknown data, and on the solver that is to be used. There are mainly two alternatives in the choice of which data will be variables. A variable is an entity that is a priori unknown and that can take part:
in the objective function either to be maximised or minimised,
in a constraint.
The variables that should be maximised are those that represent necessary or desirable quantities in the decision making process (e.g. jobs, tasks, items to be delivered, journeys to be scheduled), while the ones that generate a cost should be minimised (e.g. machines, vehicles, fuel, raw material, people, time and money). The goal is to maximise the benefits while minimising the costs. The decision making process can be driven by focussing on the benefits (modelled as variables) and assigning resources to them (domain values) or vice versa by focussing on the cost variables and allocating tasks to them. As a basic guideline, if the solver is a constraint programming method, it is preferable to have many variables with small domains. For example, if there are many resources and few tasks it is preferable to consider the resources as variables and the tasks as domain values. Now, if the solver is a linear programming package, the choice is driven by the objective function. The choice of variables should lead to an objective function that helps the search (see literature).
Constraint and decision criteria formulation
The selection of constraints follows the variable selection. However, their formulation is a more subtle task that should take into account the type of solver and search strategy intended to be used. Constraints represent relationships between the different entities (variables and data). They link together the benefits and the costs, the tasks and the resources. There are different types of relationships between tasks and resources, from a 1-1 to an m-n relationship. For example, in the case of a 1-n constraint, a task requires n resources. Due to algorithmic complexity and variable indexing, 1-n constraints can generally be decomposed into a set of 1-1 constraints. The handling of such constraints would then be done by optimising one resource and then extending the solution by an optimal assignment of the second resource. A classic example is in the airline industry, where plane types are scheduled to flights, and then crews to the plane types. This is not optimal, but such a breakdown is often dictated anyway by internal divisions within the client organisation.
The formulation of the decision criteria and objective function also derives from the selection of the variables and the size of the index sets. A given LSCO has a number of closely related optimisation problems which result from maximising the benefits (e.g. maximising the number of completed tasks) or minimising the costs (e.g. minimising the number of resources). An increase in benefits incurs an increase in cost; alternatively, the completion of the given tasks incurs a certain cost. Both types of variables should not occur together in the objective function. So one has to optimise on one side (maximise activities or minimise resources) and to constrain the other. The decision is driven by the choice of decision variables.
Dealing with problem decomposition
When the problem must be decomposed, either for organisational or technical reasons, there should be one algebraic model per sub-problem. Each model is independent of the others but connected to them by the input and output data and variables. At this stage of problem decomposition, the different solvers are not meant to intersect. For the moment, the complete model is thus a collection of sub-algebraic models. The figure below illustrates that the different algebraic models must be independent, as well as their corresponding algorithms.
[Figure: independent algebraic models and algorithms, connected only by input/output data flows]
One important point in the data flow between models is the mapping between variables from one model to data or variables from another model. The designer should then ensure that the definition and name of variables from one model can be translated into another model and thus that a mapping exists. For instance, if a model attaches integer domains to the variables, e.g. Xij whose values correspond to a third index k (Xij = k), they can be transformed into variables with 3 indices, Xijk, that take their value in a binary domain ([0,1]) for an MIP model.
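The variable mapping described above (integer-domain variables Xij = k translated into binary variables Xijk for an MIP model) can be sketched as follows; this is illustrative Python with hypothetical names, not part of the methodology.

```python
def to_binary_indicators(assignment, k_values):
    """Translate integer-valued variables X[i,j] = k into binary
    indicators X[i,j,k] in {0,1}, as needed when passing a CP-style
    solution to an MIP model.

    assignment: dict mapping (i, j) -> k
    k_values:   the full domain of k, so unused indicators appear as 0
    """
    return {(i, j, k): 1 if assignment[(i, j)] == k else 0
            for (i, j) in assignment
            for k in k_values}

def from_binary_indicators(binary):
    """Inverse mapping: recover X[i,j] = k from the indicators."""
    return {(i, j): k for (i, j, k), v in binary.items() if v == 1}
```

The existence of both directions of the mapping is exactly what the designer must guarantee for the data flow between sub-models.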
Also, the decomposition of a problem for technical, tractability reasons often results in sub-optimal solutions. In order to minimise these effects, it is necessary to modify the algebraic model of the upstream sub-problems by anticipating the issues that will be addressed in the downstream model. This can be done by adding connecting constraints between the two models.
Decomposition with anticipation: the Extended Car Sequencing Problem
(see the presentation of the problem in the section Space Dependency)
The problem is decomposed into two sub-problems for technical reasons:
1. Assigning car orders to days and assembly lines
2. Sequencing the cars for each day on the assembly lines
The performance of the sequencing sub-problem depends on the ratio of cars with given characteristics among the total number of cars assigned to a given day. However, there are no constraints in the allocation sub-problem that limit this ratio. It is therefore necessary to introduce into this first sub-problem a new constraint limiting the ratio of vehicles with given characteristics assigned per day, in order to anticipate the sequencing problem. Moreover, one may want to smooth the number of vehicles with given characteristics assigned per day so that each day yields a sequencing problem of approximately the same complexity.
Example of algebraic models
The following example illustrates the design of algebraic models for the general assignment problem with warehouse selection. A Mixed Integer Programming model is presented together with a CSP model.
The general assignment problem consists of assigning customers to warehouses without splitting a customer's demand across more than one warehouse. A given number of warehouses has to be selected.
Note that the two algebraic models represent somewhat different variations of the warehouse assignment problem!
The algebraic model for an MIP solver
Indices :
i : warehouse
j : client
Data :
cij : the cost of assigning the demand of client j to warehouse i
N : the maximum number of warehouses to select
nj : number of clients
ni : number of potential warehouses
Variables :
yi = 1 if warehouse i is selected, 0 otherwise
xij = 1 if client j is assigned to warehouse i, 0 otherwise
Constraints :
Σi xij = 1 for all clients j
Σj xij ≤ nj yi for all warehouses i
Σi yi ≤ N
yi , xij binary (= 0/1)
Objective:
Min Σi,j cij xij
Remarks:
The size of this model is the following:
number of variables : ni x nj
number of constraints : ni + nj + 1
It can be shown via polyhedral analysis that by adding one constraint per xij variable, the corresponding expanded model is much more efficient to solve:
xij ≤ yi for all (i,j)
The semantics of this constraint is that in order to assign client j to warehouse i, warehouse i must have been selected.
The corresponding MIP model has the same number of variables but now has ni x nj + ni + nj + 1 constraints. See [Jacquet-Lagrèze (1998), chapter VII] for numerical experiments comparing the two models.
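For very small instances, the warehouse selection and assignment problem above can be solved by exhaustive enumeration, which is useful for checking a model or a solver on toy data. The Python sketch below is illustrative only (it stands in for the MIP solver and exploits the absence of capacity constraints: given the open warehouses, each client is served from its cheapest one).

```python
from itertools import combinations

def solve_warehouse(cost, max_open):
    """Exhaustive solution of warehouse selection + client assignment.

    cost[i][j]: cost of serving client j from warehouse i
    max_open:   maximum number N of warehouses that may be selected
    Returns (best_cost, open_warehouses, assignment) where
    assignment[j] is the warehouse serving client j.
    """
    n_wh = len(cost)
    n_cl = len(cost[0])
    best = None
    for size in range(1, max_open + 1):
        for open_set in combinations(range(n_wh), size):
            # without capacities, each client is optimally served
            # from its cheapest open warehouse
            assign = [min(open_set, key=lambda i: cost[i][j])
                      for j in range(n_cl)]
            total = sum(cost[assign[j]][j] for j in range(n_cl))
            if best is None or total < best[0]:
                best = (total, open_set, assign)
    return best
```

Enumeration is exponential in the number of warehouses, which is precisely why the MIP and CP formulations discussed in the text are needed at realistic sizes.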
Algebraic model for a CP solver
This algebraic model is derived with the intention of using Constraint Programming technology (constraint handling methods and search techniques). The greater the number of constraints and the smaller the number of variables, the better, since in general this implies a priori better pruning of the search space during the constraint handling phase.
Names:
Warehouses = {paris, london, athens, ...}
Clients = {smith, dupont, zorba, ...}
Data:
cost(+Client,+Warehouse,+Cost) (a table of costs)
MaxWarehouses (an integer)
Variables
CWC :: Warehouses (warehouse associated with client C)
CCostC (cost of serving client C)
OpenW :: {} .. Warehouses (set of open warehouses)
Constraints
CWC in OpenW for each client C (Each client is served from an open warehouse)
cost(C,CWC,CCostC) for each client C
# OpenW =< MaxWarehouses (The set of open warehouses is not too large)
Objective function
minimize ΣC CCostC
Remarks:
If there are nj clients and ni (potential) warehouses,
the size of this model is the following :
number of variables : 2nj + 1
number of constraints : 2nj + 1
size of the a priori search space: 2^ni
Arc-consistency propagation is applied to the cost constraint, by adding an annotation to the algebraic model. Search is best done by choosing warehouses to put in OpenW.
Refinement Task
Description
The refinement task takes into account test profiling outcomes resulting from implementation work. The algorithm and design models are subject to changes for technical or efficiency reasons. In case the implementation outcomes are not satisfactory, one might have to tune the algorithm and adjust the algebraic model.
Issues
The main issues are the reformulation of the model, the monitoring of sub-models, the co-operation between solvers, and the hybrid decomposition of the problem.
Milestone
Once this task is completed, the PSD is updated. It should also keep track of all the refinements made.
Guidelines
The main guideline is to focus on the implementation results to revisit the algorithm characteristics and ensure that the problem formulation is valid:
Improving the algorithm efficiency: hybrid decomposition
Revising the problem design
Guideline: Improving the algorithm efficiency
Because LSCO problems are often difficult ones, the first version of the algorithm may not deliver the expected results once it has been implemented and tested. Thus, it becomes necessary to tune the algorithm, which means (most often) in practice:
Refining the mathematical programs,
Refining the propagation algorithms,
Defining new neighbourhoods for local search,
Improving the branching strategy through better heuristics,
Experimenting with different control strategies.
Tuning is directed by the efficiency of the algorithm. The hard part is heuristic tuning, which means trying to improve the algorithm with no guarantee that this is actually possible. There are a few systematic tricks that may be applied, mostly to the control strategies. They all amount to trying different strategies (instead of the first one that came to mind) and can be time-consuming. Performance tuning often translates into an improvement of quality for algorithms that have a built-in bound to avoid excessive run times.
Tightening integration of solvers: Hybrid algorithm
The results of implementing a solution might raise efficiency and scalability issues. Often, a tight integration between different solvers could improve the pruning and search efficiency.
Some simple decomposition of LSCOs, inherent to the problem definition, has been previously introduced, as well as a technical decomposition when the problem is not tractable as a whole.
In these two cases there is one algorithm per algebraic model. The dependencies between solvers are mainly concerned with the data flows (input-outputs).
A hybrid decomposition is concerned with improving the efficiency and scalability of the algorithm. It requires a tight integration between solvers and models, and should seldom be investigated before pure techniques have been attempted. A hybrid decomposition aims at integrating different solving techniques (from OR, CP or stochastic methods) to reach the best pruning and search results from the problem formulation. This means identifying which parts of the problem would ideally be solved by which method, and how the sub-problems and the techniques can be integrated to solve the whole problem more efficiently. This task can be difficult.
Hybrid decomposition requires a close interaction, co-operation and exchange of information and data between the different sub-models and their associated solvers. For example, some constraints from one sub-model can be sent to different solvers which would achieve complementary pruning results.
The global structure of a hybrid LSCO model is illustrated below.
[Figure: global structure of a hybrid LSCO model]
Choosing the right solver(s) for a set of constraints depends on the solver adequacy. See annex B for details on the adequacy of various techniques for different classes of constraints.
The technical aspect of how to integrate different techniques will be presented in the programming activity section.
The following three examples illustrate the construction of hybrid algorithms.
The car sequencing problem
The Car Sequencing Problem can be solved by an iterative procedure using hill-climbing (a heuristic approach) with local optimisation of the best exchanges or swaps to select. An MIP model and algorithm are used to make local exchange of cars in a neighbourhood. After each optimisation, the current solution is changed, and a new neighbourhood is automatically derived for the following iteration.
A similar approach has been proposed with extremely good results (see Laburthe 1998) for the Travelling Salesman problem with time windows. The hybrid method combines hill-climbing with local optimisation of the tours.
An exact formulation of the problem is given by the following MIP Algebraic Model
Indices
1. c option criterion
2. v vehicle
3. r rank of the vehicle in the sequence
Input data
The constraints on the option criteria are expressed by a fraction numc / denc (i.e. no more than 2 vehicles with ABS out of 3). The data are:
1. numc the numerator of the option c constraint
2. denc the denominator of the option c constraint
Variables
xrv a binary variable equal to 1 if the vehicle v is sequenced at rank r
and 0 otherwise.
zcr goal variable for the option constraints (the number of violations of constraint c between rank r and rank r + denc)
Constraints
∀v : Σr xrv = 1
∀r : Σv xrv = 1
∀c, ∀r : Σ{v requiring option c} Σ{r' : r ≤ r' ≤ r + denc} xr'v − zcr ≤ numc
Objective function
The objective function expresses the minimisation of the total number of violations.
Min Σr,c zcr
This problem is hard to solve: it is possible to find an optimal solution for 50 to 100 vehicles, but not more. A hybrid method that uses this model only for local moves (swaps), however, gives interesting results.
A hybrid algorithm : a descent method with local optimisation
A hybrid decomposition uses hill-climbing combined with a local optimisation using a MIP model. This approach uses the MIP model and algorithm to make local exchange of cars in a neighbourhood. After each optimisation, the order is changed, so a new neighbourhood is automatically derived at the following master iteration.
The method consists in starting from an initial sequence, preferably not a bad one. This initial sequence can be the one given by the previous heuristic.
[Figure: outline of the descent method with local optimisation]
The local search is performed by solving at each iteration an MIP model similar to the global one, except that it is restricted to local moves of a vehicle within its neighbourhood.
Let us remark that other models and algorithms can be implemented for the local optimisation.
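As the remark above notes, other models can drive the local optimisation. A minimal illustrative variant in Python (not the MIP-based method of the text) uses best-improvement hill-climbing over pairwise swaps, with the violation count of a num/den option constraint as the objective; all names and the simple swap neighbourhood are assumptions for the sketch.

```python
def violations(seq, options, num, den):
    """Soft option constraint: at most num cars needing the option in
    any window of den consecutive cars; counts offending windows."""
    return sum(1 for s in range(len(seq) - den + 1)
               if sum(1 for c in seq[s:s + den] if c in options) > num)

def descend(seq, options, num, den):
    """Hill-climbing: repeatedly apply the best improving swap of two
    cars until no swap reduces the violation count (a local optimum).
    Stands in for the MIP-optimised neighbourhood of the text."""
    seq = list(seq)
    cur = violations(seq, options, num, den)
    improved = True
    while improved:
        improved = False
        best_move, best_val = None, cur
        for i in range(len(seq)):
            for j in range(i + 1, len(seq)):
                seq[i], seq[j] = seq[j], seq[i]   # try the swap
                val = violations(seq, options, num, den)
                seq[i], seq[j] = seq[j], seq[i]   # undo it
                if val < best_val:
                    best_move, best_val = (i, j), val
        if best_move:
            i, j = best_move
            seq[i], seq[j] = seq[j], seq[i]
            cur = best_val
            improved = True
    return seq, cur
```

In the hybrid method of the text, the inner double loop would be replaced by solving a small MIP over the neighbourhood, which can pick a better combination of moves than a single swap.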
Dynamic scheduling
This application shows how important it is to hybridise algorithms by taking into account the structural properties of the problem together with the strengths and search power of the different solvers.
The following problem is a dynamic variant of the classic scheduling problem. The aim is to restore consistency in a schedule which has been disrupted due to resource or activity change. The problem and its solution are fully described in [El-Sakkout and Wallace 98]. To avoid disruption, the new schedule should minimise the amount of change.
This problem contains a set of:
Activity constraints that define an area (variable) of activity by the variable quantity of resources used multiplied by the variable duration of the activity. They are non-linear constraints.
Temporal constraints that set rules of precedence and succession among the different activities. They are linear constraints.
Resource constraints that set the variable number of resources per activity and ensure that the total number of one resource type is sufficient to satisfy the concerned activities. Most of them are disjunctive constraints.
This problem could be modelled as a linear problem and sent to an off-the-shelf linear solver. However, the solver would not return, in the general case, integral optimum solutions and consequently would not address the feasibility issue at all when searching for a new consistent schedule. Indeed, one could round the relaxed real values and search for integral solutions, but this is a costly search strategy in the sense that no information about feasibility is learnt from the relaxed solution. A lower bound on the cost function is, however, obtained.
Instead, the best algorithm proved to be a tight integration of constraint programming and mathematical programming that makes the best use of each solver given the structural properties of subsets of the constraint set [El-Sakkout, Richards and Wallace 98]. The set of temporal constraints has a unimodularity property which guarantees that a linear solver finds optimal integral solutions. Thus, a new feasible schedule can be sought by first computing the integral values of the temporal variables and then using constraint propagation and search heuristics to make the rest of the problem consistent. If some resource constraints are violated, then new temporal constraints are generated to set some activities apart in order to resolve the resource violations. The new temporal constraints are then added to the previous set of temporal constraints in the search for a feasible optimal sub-solution.
This process is repeated until a feasible schedule is found. In case of failure the added temporal constraints are removed and others are tried. The sketch of the algorithm that restores feasibility in a scheduling problem is presented below. The following procedure is called at each branching node of the search tree.
[Figure: sketch of the algorithm restoring feasibility, called at each branching node]
The branching strategy is based on the added constraint C. If its addition to the program leads to consistency and feasibility then the algorithm explores this branch, otherwise it backtracks and adds the negation of the constraint to both the LP and CP constraint stores.
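A deterministic toy version of this repair loop can be sketched in Python (illustrative only: unit resource demands, integer durations, and a naive "push one conflicting activity after another" rule stand in for the CP/LP machinery of the text).

```python
def earliest_starts(durations, precedences):
    """Earliest start times under precedence constraints
    (longest path in the precedence DAG, computed by relaxation)."""
    n = len(durations)
    start = [0] * n
    for _ in range(n):  # n relaxation passes suffice for a DAG
        for a, b in precedences:  # a must finish before b starts
            start[b] = max(start[b], start[a] + durations[a])
    return start

def repair_schedule(durations, capacity, precedences):
    """Probe-style repair: schedule at earliest starts, and whenever
    the unit-demand resource is overloaded at some time, push one of
    the conflicting activities after another by adding a precedence."""
    precedences = list(precedences)
    while True:
        start = earliest_starts(durations, precedences)
        horizon = max(start[i] + durations[i] for i in range(len(durations)))
        conflict = None
        for t in range(horizon):
            running = [i for i in range(len(durations))
                       if start[i] <= t < start[i] + durations[i]]
            if len(running) > capacity:
                conflict = (running[0], running[1])
                break
        if conflict is None:
            return start, precedences
        precedences.append(conflict)  # running[0] now precedes running[1]
```

The real algorithm branches on the added constraint and, on failure, backtracks and posts its negation; this sketch keeps only the "add a separating temporal constraint and re-solve" core.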
A note on scalability
Column generation is a very powerful hybrid technique used to solve some large scale set covering linear problems, such as human resource planning or scheduling. For example, in a bus driver scheduling problem, although the optimal solution was found among a potential set of as many as a million shifts, only a small fraction (a few hundred) of them were effectively introduced in the column generation master program.
The Column generation technique
The Column generation technique, when applied to combinatorial optimisation, is a genuine hybrid algorithm with two co-operating sub-problems, each using its own model and algorithm (see [Minoux 1983, chapter 10]).
The set covering problem selects a best subset of columns, and is solved by an MIP formulation of the problem or, usually, by its continuous relaxation. This problem has its own model based on an LP set covering model with additional constraints. It takes into account basic covering constraints (i.e. each task must be covered by a column) and global constraints such as the number of available resources (trucks, staff, ...).
Let cj be the cost of column j and aij = 1 if column j covers task i, 0 otherwise. The set covering problem is expressed as follows:
Min Σj cj xj
subject to Σj aij xj ≥ 1 for all tasks i
xj binary (= 0/1)
The columns entering the master program are priced by their reduced cost cj − Σi aij πi, where πi is the dual value associated with the covering constraint of task i.
A shortest path problem is defined on a valued graph whose values are derived from dual information obtained from the set covering problem. The associated shortest path algorithm must often take into account cumulative constraints in order to find feasible columns (i.e. feasible tours with constraints on weight, time, schedule, etc.).
This method is particularly powerful since it enables the optimal solution to be found when the decision variables are continuous. It has been used most successfully in multi-network optimisation models such as those found in telecommunication networks. When the variables are 0/1 variables and when most of the constraints are tasks to cover, the method very often allows us to find either an optimal solution or at least very good quality solutions, by allowing some branch and bound in the last LP problem. This method is also used to solve complex crew scheduling problems or bus driver scheduling problems efficiently. See also [Haouari, 1991] for an application to the travelling salesman problem with time windows.
An example of this approach is the inventory management problem (see [Baptiste et al. 1998] for a problem description and proposed solutions), which will be addressed in Annex D. The problem consists in assigning resources (equipment) to orders (different places where equipment is required for construction). The set covering problem is the selection of columns which represent full feasible schedules of a single resource unit (an equipment unit). The shortest path problem is built on a graph whose nodes (vertices) are the orders and the potential start days of maintenance activities. The links of the graph (edges) represent precedence constraints between the orders, and between maintenance activities and orders. A cumulative constraint in the shortest path algorithm allows the maintenance constraint to be taken into account.
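The role of the restricted master program can be illustrated on a miniature scale; in the Python sketch below, exhaustive search stands in for the LP/MIP solver (viable only at toy sizes), and all names are hypothetical.

```python
from itertools import combinations

def min_cost_cover(columns, costs, tasks):
    """Exhaustive minimum-cost set covering: pick the cheapest subset
    of columns whose union covers all tasks.

    columns: list of sets of task ids (each a candidate schedule/tour)
    costs:   cost of each column
    tasks:   set of task ids that must all be covered
    Returns (best_cost, column_indices) or None if no cover exists.
    """
    best = None
    for size in range(1, len(columns) + 1):
        for subset in combinations(range(len(columns)), size):
            covered = set().union(*(columns[i] for i in subset))
            if covered >= tasks:
                total = sum(costs[i] for i in subset)
                if best is None or total < best[0]:
                    best = (total, subset)
    return best
```

In real column generation, only a few promising columns (found by the shortest path sub-problem via reduced costs) ever enter this master problem, instead of enumerating all of them.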
Guideline: Revising the problem design
Once an algebraic model is defined, it will most probably be subject to changes resulting from the implementation outcome and the tuning of the algorithm, including the building of a hybrid algorithm. In such cases, it is necessary to adjust the problem design correspondingly. Two types of changes are distinguished: minor and major changes. Minor changes do not require a revision of the complete model and correspond to local addition, deletion or reformulation of constraints or data. Major changes are sometimes necessary when the problem is intractable in its current formulation or the associated algorithm is not efficient enough. Major changes are concerned with dual formulations and with simplifications when sub-problems have not been clearly identified.
Minor changes
Minor changes are essentially motivated by algorithm efficiency and hybrid decomposition. The main issue is to make sure that any transformation of a constraint set from one formulation to another does not lose any properties of the constraints. If it does, the algorithm will most probably not produce the best results in pruning and search efficiency. Two main adjustment situations are identified: changing some constraint formulation (e.g. linearisation, from symbolic to numerical), and adding redundant constraints.
Changing constraint formulation
Changing the formulation of some constraints is required when one formulation cannot be handled by the algorithm at hand, or when it loses some structural properties of the constraints. If possible, the constraint formulation should remain as close as possible to the initial definition of the constraints. For example, it is not advisable to formulate disjunctive constraints with linear constraints and Boolean variables. The semantics underlying the integral and binary nature of the Booleans cannot be tackled by an MIP solver: the solver will basically search for a relaxed solution and then round the real values to search for 0 or 1.
Also, in some cases it can be useful to have different formulations of one constraint for different types of solvers. For example, in case of non-linear constraints it can be useful to approximate in two ways. Firstly the constraints in the non-linear form can be handled by a real interval arithmetic solver, which yields increasingly precise rectangular approximations to smaller and smaller ranges of the constraint. Secondly a linear formulation can be used which statically approximates fixed ranges of the non-linear constraint, being handled by a linear solver. Both solvers will infer complementary information that might improve the pruning and search efficiency. This corresponds to the building of hybrid solvers.
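The interval-arithmetic side of this dual formulation can be sketched in a few lines of Python; the functions below are illustrative stand-ins for an interval solver's primitives, not a real solver API.

```python
import math

def mul_interval(x, y):
    """Interval product [x]*[y]: the smallest interval containing all
    products of a point in x with a point in y (handles sign changes
    by taking min/max over the four corner products)."""
    products = [a * b for a in x for b in y]
    return (min(products), max(products))

def propagate_square_le(x, limit):
    """Tighten x = [lo, hi] under the non-linear constraint x*x <= limit.
    This is the kind of pruning an interval solver performs, whereas a
    linear solver would only see a static linear approximation."""
    lo, hi = x
    bound = math.sqrt(limit)
    return (max(lo, -bound), min(hi, bound))
```

A linear solver working on a static piecewise-linear approximation of the same constraint would deduce different (complementary) bounds, which is exactly why running both can improve overall pruning.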
Adding redundant constraints
In order to improve the model, the programmer may look for redundant constraints, i.e. constraints that are consistent with the algebraic model (respected by all solutions to the model), but which may lead to new inferences. Adding redundant constraints can have a tremendous benefit in terms of efficiency. Here are a few tips for finding such redundant constraints:
Introduce new variables to model the problem from another point of view and link both models. In particular, look for symmetrical situations. For instance, resource allocation problems where one has to match resources and tasks can be modelled either with variables describing the resource performing each task, or variables describing the set of tasks allocated to a given resource. It is interesting to use both models at the same time.
Carefully reconsider indexed variables, in particular binary variables with multiple indices. It is often worthwhile to introduce new variables such as x = min(yi), yi = min(j such that xij > 0), zik = j such that xij = k, ...
These variables can, in turn, be used in order to express new constraints.
Take a global view of the problem for counting (summing or taking the minimum) of a set of elements. For multi-dimensional situations, various aggregation strategies are possible (counting first in one dimension, then in the other). This may lead to so-called mini-max (redundant) inequalities.
A global view can also give information about variables which are mutually exclusive. A bound on the maximum number of Boolean variables simultaneously set to true can significantly improve the quality of linear relaxations.
Combine several constraints in order to produce a new one. Within MIP, it is useless to consider linear combinations of linear constraints, which will not change the value of the relaxed problem; one has to combine linear constraints with integrality information (yielding for example the so-called Gomory cutting planes). This task is very hard, and users should look in the literature for the wealth of inequalities that have been proposed on academic combinatorial problems. The situation is easier in CP: first of all, linear combinations of linear constraints can improve the propagation scheme, especially when such combinations eliminate some variables and feature small coefficients. Other kinds of combinations can also be performed within CP, for example combinations of difference constraints and linear constraints (taking into account regret information).
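The first tip above — modelling the problem from two points of view and linking both models — can be sketched as follows. The task and resource names and the instance are illustrative assumptions only.

```python
# Two viewpoints of the same resource-allocation problem, kept consistent
# by channelling; each view makes different constraints easy to state.
tasks = ["t1", "t2", "t3", "t4"]
resources = ["r1", "r2"]

# First model: the resource performing each task.
assigned = {"t1": "r1", "t2": "r2", "t3": "r1", "t4": "r2"}

# Second model, derived by channelling: the set of tasks per resource.
workload = {r: {t for t, res in assigned.items() if res == r}
            for r in resources}

# A capacity limit reads naturally on the second model...
assert all(len(ts) <= 2 for ts in workload.values())
# ...while "t1 and t3 must share a resource" reads naturally on the first.
assert assigned["t1"] == assigned["t3"]
print(workload)
```

In a real model both sets of variables would be posted to the solver together with the channelling constraints, so that propagation in either view prunes the other.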
In the case of MIP solvers, adding such redundant constraints, which appear as cuts, allows a better behaviour of the Branch and Bound algorithm. More recent research has proposed to use B&B techniques jointly with cut generation, yielding hybrid techniques. For instance, a generalised assignment problem (with knapsack constraints), with x(i,j) being 0/1 assignment variables and y(i) being decision variables for the selection of the depots (i), is known to be hard to solve. However, if additional constraints (cuts) such as x(i,j) ≤ y(i), for all (i,j), are introduced, the problem becomes very easy to solve with a classical branch and bound method. The underlying polyhedra of the two LP models differ a lot.
Major changes
Some problems have to be transformed more drastically in order to enable the efficient use of some specific techniques. Two important types of reformulation can be distinguished: problem simplification and duality form.
Simplification
In some cases, there exists a good algorithm for the resolution of a simplified version of the problem. The problem can be simplified to experiment with this algorithm and extended later on to model the full requirements. The simplified version may be obtained by ignoring some constraints, linearising (even convexifying) functions, modifying optimisation criteria, or simply concentrating on some part of the problem, e.g., on the bottleneck resources of a scheduling problem. The main problem in exploiting the simplification is to derive a good solution of the full problem from a good solution of the simplified problem. This may not always be easy, either because ignored constraints are in fact hard to satisfy, or because their satisfaction implies a significant degradation of the solution quality.
Hybrid algorithms can be very useful in such a case. One may use one approach to solve the simplified problem (e.g. mathematical programming), then rely on another method to improve the quality of the generated solution with respect to the real problem constraints and criteria (e.g. local search).
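A hedged sketch of this "simplify, then repair" scheme: the simplified model drops a conflict constraint and is solved by a greedy longest-processing-time heuristic; a local move then restores the ignored constraint. Task names, durations and the conflict pair are illustrative assumptions only.

```python
durations = {"a": 5, "b": 4, "c": 3, "d": 3, "e": 2}
conflict = ("a", "d")        # the full problem forbids a and d together

# Phase 1: solve the simplified problem (conflict ignored) greedily,
# always placing the next-longest task on the least-loaded machine.
machines = {0: [], 1: []}
load = lambda m: sum(durations[t] for t in machines[m])
for task in sorted(durations, key=durations.get, reverse=True):
    machines[min(machines, key=load)].append(task)

# Phase 2: repair the ignored constraint with a local move; note that the
# repair may degrade the makespan, as the text warns.
t1, t2 = conflict
for m, ts in machines.items():
    if t1 in ts and t2 in ts:
        ts.remove(t2)
        machines[1 - m].append(t2)

assert not any(t1 in ts and t2 in ts for ts in machines.values())
print(machines)
```

Here the repair is a single move; in the hybrid schemes mentioned above, a full local search would take over from the simplified solution, trading off feasibility against the degradation of the objective.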
Duality form
Problems can also be reformulated into equivalent or dual problems. In mathematical programming (in the convex and thus in the linear case), the dual of a minimisation problem is a maximisation problem that has the same optimal value as the original (called primal) problem. This property proves to be extremely useful in a variety of ways: for instance, in problems where the number of constraints is much larger than the number of variables, solving the dual problem rather than the primal one will achieve a substantial reduction in computation effort. In constraint programming, there is also a notion of dual. Roughly speaking, the dual of a problem is obtained by replacing constraints by variables, and variables by constraints [Dechter & Pearl 89]. When n variables must be mapped to n values, and the values of the variables must be all different, one can also create a new problem such that the variables of the new problem are the values of the initial problem, and conversely. In some cases, the initial problem and its reformulation can be linked together and solved in parallel (see [Jourdan 95]) or together [Jourdan & Lissajoux 95].
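The CP notion of dual described above can be illustrated on a toy permutation. The instance is an assumption made for the example: x[i] gives the value of variable i, while in the dual model y[v] gives the variable that holds value v, linked by the channelling condition x[i] == v if and only if y[v] == i.

```python
n = 4
x = [2, 0, 3, 1]              # primal model: an all-different assignment

y = [None] * n                # dual model, derived by channelling
for i, v in enumerate(x):
    y[v] = i

# The channelling condition holds in both directions.
assert all(y[x[i]] == i for i in range(n))
assert all(x[y[v]] == v for v in range(n))
print(y)
```

Posting both models and the channelling constraints to a solver lets propagation on either side prune the other, which is how the linked resolution mentioned in the text operates.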
Guideline: On fractal design
The lifecycle presented in Section 1 showed that many stages of the project life (from the identification of a LSCO opportunity to the industrialisation) involve the design activity as well as the implementation activity. Indeed, design and implementation closely interact: experiments made possible by a first implementation point out limitations in the design, which can be refined, which turns into a more specific implementation yielding further experiments, and so on.
The LSCO team starts with an incomplete design, which is obtained by voluntarily restricting itself to a coarse level of analysis, which will be deepened when feedback from the implementation is available. This technique is called fractal design, because the program grows like a fractal that is built recursively. The design activity is similar at each stage in the lifecycle: it involves the design (and implementation) of a set of functions, at an increasing level of magnification. Each level of detail does not invalidate the previous level, but refines it by filling the blanks. As observed for a mathematical fractal, the shape at each level is approximately correct but the final object is still quite different from these iterative approximations.
The first step is to build the control flow and implement each optimisation function with a crude approximation (most often a heuristic). The design is focused on the data model and its control flow. The associated implementation is straightforward and can be used to check the validity of the control flow and the consistency of the input data. It is also used to gather statistical data about the problem, and about the different phases of the overall control strategy.
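A minimal sketch of this first step, with purely illustrative data and stub functions (all names here are assumptions for the example):

```python
def load_data():
    return {"tasks": ["t1", "t2", "t3"], "resources": ["r1", "r2"]}

def check_consistency(data):
    # Crude input validation: both lists must be non-empty.
    assert data["tasks"] and data["resources"]

def crude_assign(data):
    # Placeholder optimisation function: round-robin assignment,
    # to be replaced by a real algorithm in later fractal steps.
    res = data["resources"]
    return {t: res[i % len(res)] for i, t in enumerate(data["tasks"])}

def report(solution):
    # Gather simple statistics about the run.
    print(len(solution), "tasks assigned")

data = load_data()
check_consistency(data)
solution = crude_assign(data)
report(solution)
```

The point is that the whole control flow runs end to end from day one; only the body of crude_assign will be deepened as the fractal grows.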
Then, the design and implementation of the major optimisation function starts, with the choice of the general principles.
For instance, the use of a truncated search to build a feasible solution and then the use of a local optimisation scheme may be selected. The design focuses on the framework to implement these two approaches, leaving aside the technical details. For instance, a generic branching strategy might be used and a crude neighbourhood defined. The implementation can be later tuned by using different choice heuristics or local moves.
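The two-phase scheme can be sketched on a toy tour problem. Both the construction heuristic and the swap neighbourhood below are deliberately crude placeholders of the kind the text describes, meant to be refined later; the points and distances are illustrative assumptions.

```python
import itertools, math

points = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5)]
dist = lambda a, b: math.dist(points[a], points[b])
length = lambda tour: sum(dist(tour[k], tour[(k + 1) % len(tour)])
                          for k in range(len(tour)))

# Phase 1: greedy nearest-neighbour construction of a feasible tour.
tour, todo = [0], set(range(1, len(points)))
while todo:
    nxt = min(todo, key=lambda p: dist(tour[-1], p))
    tour.append(nxt)
    todo.remove(nxt)

# Phase 2: first-improvement local search over the pairwise-swap
# neighbourhood, iterated until no swap improves the tour.
improved = True
while improved:
    improved = False
    for i, j in itertools.combinations(range(len(tour)), 2):
        cand = tour[:]
        cand[i], cand[j] = cand[j], cand[i]
        if length(cand) < length(tour) - 1e-9:
            tour, improved = cand, True
            break

print(tour, round(length(tour), 2))
```

Tuning then amounts to swapping in better choice heuristics for phase 1 or richer moves for phase 2, without touching the surrounding framework.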
Evidence about where the optimisation effort should be applied is gathered next, and the technical functions are designed in more detail. The tuning of the algorithm follows this scheme of fractal growth, successively replacing an optimisation function by a more refined one (and carefully recording the benefits associated with each substitution).
Finally, the parameters of the algorithm are fine-tuned. These steps are usually performed with the systematic use of a profiler (cf. section 5).
Fractal design has many interesting properties. By making the design and implementation activities interact throughout the project life (instead of designing first and implementing afterwards), it helps to reveal that some problems are too difficult to solve in early stages of the fractal growth. Thus, it can act as an effective way to reduce risk. It also quickly produces a prototype implementation that can be shown to the end-user, to validate some design options or to ask for some guidance. It is also an efficient way to avoid overspending the design budget on levels of detail that will be invalidated later. Lastly, for complex problems where the objective function is ill-defined, there may be too little previous experience available to follow the designer's intuition. The fractal design process supports the designer with experimental evidence in order to build a proper design. By building expertise throughout the design, fractal design is one way to build strength using the virtuous cycle experiment → design → implementation without losing too much energy in a wasteful prototyping cycle.
In addition, fractal design is well suited to today's state-of-the-art upper CASE (Computer Aided Software Engineering) tools. In an ideal world, an upper CASE tool could be used to capture the design and semi-automatically produce the implementation. However, it is a sad reality of life that no such tool exists today. On the other hand, many tools are getting pretty close. The interest of fractal design is that upper CASE tools can be used in the first two stages of the lifecycle to automate the production of code, since these early steps are concerned with data models and control flows, for which the CASE tools are quite useful. For instance, the use of standard UML (Unified Modelling Language) notations for data & flow models improves the readability and the ease of maintenance of the design, and tools are available to produce code templates from such documents. The preferred practice is then to capture the early designs with such a tool (say, in the stages concerning the identification and the beginning of the exploration of the LSCO opportunity) and then to revert to more classical textual & graphical descriptions for the low levels of design. As a matter of fact, the interest of such an approach is to focus the obligation to produce a design document (often felt as a constraint by the LSCO team) on the very part where it is mandatory: the design of complex algorithms. The capture of early designs can be performed with a CASE tool such as WAM (WYDE) or Rose (Rational) or with a high-level modelling language such as CLAIRE or ECLiPSe (used here as a DML: Design & Modelling Language). The use of such tools greatly increases the traceability of design decisions and reduces the cost of software development.
The practice of fractal design can be summarised as follows:
Design the data model and the control flow, using a CASE tool or a DML. Experiment to check the availability and consistency of input data, perform simple statistics. At this stage, the control flow is fleshed out with the crudest heuristics.
Design the various optimisation frameworks that will be necessary to solve the problem. The design can also be captured with a CASE tool or a DML. Reuse is both possible and desired. The implementation is still straightforward and the resulting software is used to gather more complex statistics about the problem.
Instantiate the optimisation framework with adequate functional parameters, starting with naive and generic techniques and progressively introducing ad-hoc and specialised ones (specific neighbourhoods, domain-dependent heuristics, etc.). Also, more redundant constraints can be added, the data model can be extended and the control flow may be refined (the fractal nature of the design applies to all components). This stage usually involves many improvement cycles (refine the design → implement → experiment → refine the design).
Fine-tune each of the optimisation functions through extensive experiments. These experiments must be registered carefully and a journal of the fractal growth may be built. The result must include a proper design document (included in the PSD) that explains how the final version of the algorithm works.
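The third step of this practice — instantiating the framework through functional parameters — can be sketched as follows. The tiny graph-colouring instance, the names and the two heuristics are illustrative assumptions, not taken from the text.

```python
# The search framework takes its variable-choice heuristic as a
# functional parameter, so the naive rule can later be replaced by a
# specialised one without touching the search code.
edges = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")}
nodes = sorted({n for e in edges for n in e})

def conflicts(n, col, colours):
    # True if giving colour col to node n clashes with a neighbour.
    return any(colours.get(m) == col
               for e in edges if n in e for m in e if m != n)

def search(choose, colours=None):
    # Backtracking search, parameterised by the choice heuristic.
    colours = colours or {}
    todo = [n for n in nodes if n not in colours]
    if not todo:
        return colours
    n = choose(todo)
    for col in range(3):
        if not conflicts(n, col, colours):
            result = search(choose, {**colours, n: col})
            if result is not None:
                return result
    return None

naive = lambda todo: todo[0]                   # generic: first unassigned
degree = lambda todo: max(todo, key=lambda n:  # specialised: highest degree
                          sum(n in e for e in edges))

print(search(naive))
print(search(degree))
```

Swapping naive for degree changes only one functional parameter, which is exactly the kind of substitution the improvement cycles record and compare.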
The fractal decomposition is kept in the PSD since it helps in the understanding of the global structure and can be used as training material for other developers, by providing different abstractions of the software. Together with the use of upper CASE tools, this makes fractal programs easier to maintain and, therefore, cheaper to develop.
http://www.ms.ic.ac.uk/library/
WAM (WYDE): http://www.wyde.com
Rose (Rational): http://www.rational.com
CLAIRE: http://www.ens.fr/~laburthe/claire.html
ECLiPSe: http://www.icparc.ic.ac.uk/eclipse/
Esprit Project 22165 D3.4 The Chic-2 Methodology for Combinatorial Applications - Engineering for Optimisation Projects
Solution Design Activity
.A:"T >!&: &&TNPP8Q=P
&
TNPP &&TNPP "-- "--
"Arial-.
2
PSD569"Arial-.
i2
Problem%
/"Arial-.
i2
/Solution%
"Arial-.
i2
{oDeliverable( & -- "System-@ "---'-
$ & & --0q "-q(--'-
$p98 & & --@v "-nq--'-
$c/-u
& f"--z:-- "Arial-.
q2
wPDD5::"Arial-.
i2
Problem%
/"Arial-.
i2
Definition(
"Arial-.
i2
`kDeliverable( "----
Z"Arial-.
i2
Refinement4(,(@(, @."Arial-.
i!2
DProblem modelling0,,(@@,,(,,"Arial-.
i2
algebraic model((((($<((("Arial-.
i2
q
data model(((<((( @J"Arial-.
i2
Algorithm 4,,,@"Arial-.
i2
YCharacterisation4,((((((,,"Arial-.
i2
G find the ((((("Arial-.
i2
flavours ($(($( & ---@ "---'-
$|- &
$(p & --@ "---'-
$'
$(- &&TNPP &--a Z
:A% #&WordMicrosoft WordK
-Times New Roman- - "-i< "--]G-#2
K8KStaff Scheduling.%.%)%*)))-2
KProblem/**%?'- "-IE--=P-2
TqKShifts.) -2
KGeneration<%)%%*)'- "-I<E--=0P-2
TKRosters7* % -2
OKConstruction7*) )%*)'- "-8--
C-2
GwKMIPJ/Times New RomanX-2
K(Large scale)'!-2
oK'- "-e8--
YC-2
G2KShifts.) -2
KAssignment; ))?%)-2
GKMIPJ/-2
l
K(Large scale)'!-'- "-8--
C-2
GKRoster7* %-2
Kbuilding*)*))-2
KCLP72/-'- "-c8--
WC-2
GUKShifts.) -2
>Klengths%))) -2
Kextension%)%) *)-2
miKCLP72/' "-h88- "-A- "-E- "-A&- "-&E&- "-t- "-H8- "-H- "-t8t- "-8- "-8---t :Rgc
R .1@&`
&MathTypeTimes New Roman*wgw
@
-2
=Ny2
VSby2
V
by2
Vay2
hy2
Wcy Times New Roman*wgw|
-2
Vj2
VOij2
Vij2
ij2
ij2
ijSymbol
A!w*wgw
A
-2
2
2
V2
V2
V|2
V-"2
72
SymbolH
!w*wgwH
-2
2
?2
Times New Roman*wgw
B
-2
ri2
2
i2
V
i2
V 2
VNi2
i2
OiTimes New Roman*wgwH
-2
M y2
,y2
xy2
@ 2
V y2
V+y2
Vxy2
V7 y2
Jyy2
+y2
xyTimes New Roman*wgw
C
-2
;02
2
V 2
V>j
2
that 2
-such2
x
)u2
(u2
u 2
@Minh Times New Roman*wgwH
-2
| i2
i2
I
&
"Systemwf
-in:` .1`&&MathTypeSymbol
!w*wgw
-2
!Symbol.
!w*wgw.
-2
2
."3 Times New Roman*wgw
-2
iyTimes New Roman*wgw.
-2
<jTimes New Roman*wgw
-2
632
y2
Times New Roman*wgw.
-2
1)32
produced(c Times New Roman*wgw
-2
jy2
-i,
&
"Systemwf9
-o N:Rc
0 .1&`.&MathTypeSymbol
!w*wgw
-2
M2
Times New Roman*wgw.
-2
j32
iyTimes New Roman*wgw
-2
)2
3 2
:max(Times New Roman*wgw.
-2
)2
produced(c Times New Roman*wgw
-2
j32
i,
&
"Systemw f
-pr:*}-0 e &WordMicrosoft Word
D-Times New Roman- --$Z7Z7--- !9-- !9-- !9--- --p
Times New Roman-2
Conceptual<*.%%..*-2
Model 1N*.%*'----$s----uZ
Times New Roman-2
data flow.)).B'_~-2
data flow.)).B'--$s----$s$----$]]--- !:-- !:-- !o-----$,8--;A-2
Me
data flow.)).B'0P-2
Bt
data flow.)).B'&t----$6----$6--
&
&
--j--&0 -- 0-- 4-2
E
Algorithm<**%.E-2
Design 1a<% *.**'
&
&/--/--36-2
Dm
Algebraic<*%.%*%-2
u
Model 1aN*.%**'
&
-n1j=---
$?~2Yv------
$--
&
&%--_--& -- -- -2
Algorithm<**%.E-2
)'
Design 1b<% *.*.'
&
&)&--&)--
B-2
y
Algebraic<*%.%*%-2
'
Model 1bN*.%*.'
&
->J---
$L>)---SS---
$?fS--
&
&P----$/""----$/--
&
!-vv@---
$+U+~v----
$Ukv--wv!Times New Romanw-2
m
DesignB)$.3-2
V
ActivityB)..'-F:
0
-&WordMicrosoft Wordo
-Times New RomanHO- - "-$
]A]A "--- !:
-- !:
-- !o- "-{{-- "-$d{{8--K;Times New Roman-2
? odata flow.)).B'Z0-2
4 odata flow.)).B'&It "--- "-$n--- "-$n--
&
- "-LjQ--&
- "-.0H--u6`Times New Roman-2
: oSolver 1a.**%%**'
&
&r
- "-L/--<f4-2
8 oAlgebraic<*%.%*%-2
oModel 1aP*.%**'
&
"-nj-- "-
$~Yv-- "-,-- "-
$*A--- "-_X]--- "- 2H- -"W-2
oSolver 1b.**%%*.'&
- "- - -s-2
oAlgebraic<*%.%*%-2
oModel 1bP*.%*.'
&
"--- "-
$-- "-M0Y-- "-
$"=0bEE-- -B(- "- !yH--- .)$++
#
#$
$
$
$
$
$
$
{
{$kk
c
c$SS
K
K$::
2
2$""
$
$
$
$
$
$
$zz
r
r$bb
Z
Z$F
J
JF$F"J"J*F*$F:J:JBFB$FRJRJZFZ$FjJjJrFr$N~NVV~$f~fnn~$~~~~$~~$~~$~~$~~$~~$~~$&~&..~$?~?GG~$W~W__~$o~oww~$~~$~~$~~$~~$~~$~~$~~$)x-x-p)p$)`-`-X)X$)H-H-@)@$)0-0-()($)--)--- "---- "-
$. -A'--- "-
$|-- "-r5-- "-
$--- "-
$$g;Ge---2
: z
2&WordMicrosoft Word
-@Times New Roman0- - "-y\- "-- "-Ga--Fa@Times New Romanl-.#2
Initial sequence
.'--LD--LD- MD.2
MDit = 0
.'- "-o----vA!--uA!- ".2
"Local Search
.'--h--h- i.2
iwith MIP%
,.'&_ "--%\--&&OsT "--%pQQ--&&Yw^ "--%[y[--&&Pg--$P[f[P--&&mr "--%oo--&&dz--$doyod--&&uT "--%Qw--&&e--$~z~e~--&&h "--%e--&--,<--+<@Times New Romand- <.22
<A new sequence is obtained
.-'--V--U- .2
it = it + 1
.'--H--H- H.2
HEnd.'- "- -- --{g6--zg6- 7.2
7If it <
.7.2
7nMaxIter$
.'-:&B% K
;&WordMicrosoft WordSystem
-@Times New Roman*wgw
W
- --= ----
--
.)2
4. Select a violated28,,,,22,,2.."2
constraint,22'!,2.'----v@Times New Roman*wgw
G
- ./2
5. Resource feasibility:2H,'28,,,",2'8!2!.*-o
.;2
o
Choose a new ordering constraintC222',#,#2,H#2!2,!21#,22'!,2.-
.
2
CH.-
8.,2
8 to reduce violations.2!,22,,22,22'.].82
]i.e. by forcing apart temporal,P2/P!2!,21P,2,!Q,N22!,..2
variableso2,!,2,'.'--1--%- ../2
.2. Temporal optimisation2C,R82,228!R'2!28.*-..12
.Apply LP (Simplex) to theH22/;8!8N2,3!22,...)2
.temporal constraints,N22!,,22'!,2'.'---- - .2
ConsistentC22'',2.*'--<k/--5s7- w7.#2
w7 Optimum integerH2N2N2,1,!.*r.2
rvalues2,2,'.'&gP--%k#--
$!K!--&&"V--%;;--
$Q;&--&--i --` - .2
Empty:=N2/.*-EA.2
EA Exit withC2!I!8.A.2
Asuccesst'8,,,''.'&c
--%g --
$
--&&--%--
$--&--q--e~- .2
1. 2.*@ Arial
!w*wgw
-W.
2
W .-i.:2
iApply CP to all the constraintsH22/C82,2,,22'!,2'.'&--%--
$--&--- \_--! Rk- .(2
3. Determine set ofi2BH,,!N2,C',C2!.*
.+2
violated resource and22,,2+!,'22!,,,,22..)2
activity constraints,,2/,22'!,2'.'-:]! L .1`&&MathTypeTimes New Roman--
2
@Min Uk` 2
wa 2
wX12
w
for all t`~`kk`k2
wask i`k`Times New Roman1-
2
wij,, Times New Roman-- 2
j>Times New Roman1- 2
c`Times New Roman-- 2
i, Times New Roman1- 2
j>PSymbol- 2
> 2
w` 2
wPSymbol- 2
6 2
8Times New Roman-- 2
hx 2
wx 2
w ``Times New Roman1- 2
(j5 2
wJj5
&
"Systemn-:>,.p#->> _C&WordMicrosoft Word
System
-@Times New Roman*wgw$
- Root Entryg-`LWordDocument<oiObjectPool@Q@Q6(SummaryInformation,HD1&(6(
FMicrosoft Word Document
MSWordDocWord.Document.69q82m9!o ZV8<mZ @OY%1<$$>>??DDDDDDGHOHMMC}%&}Root Entryg-*FLWordDocumentXObjectPool@Q@Q6(SummaryInformation,HD1&(6(8m9DocumentSummaryInformation8_986288291X F@Q _986288292PF A_986288293JFA㿆
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMORSTWYZ[\]^_`abcdefghijlprvyz{}~՜.+,0HPdlt| EPU-NTUA)0Oh+'0
4@L
Xdlt|0>DChaniatNormalIC-Parc9CMicrosoft Word for Windows 95@(@Hd\ܖ@%@,}-
FImage Microsoft Word
MSWordDocWord.Picture.89q>/. C&WordMicrosoft Word
-_986288294
DF㿆㿆_986288296= F㿆І_986288301 3 FІ=_986288302* F=`e_986288303
" F`e@._986288304F@.`_986288306 F` Ole
P
!"#$%&'()*+,-./01234567:;<=>?@ABCDEFGHIJKLNOPQRSTVWXYZ[\^_`abcdfghijklnopqrstuvwx{|}~@ Arial"
!w*wgw"
-.
2
t".@Times New Roman*wgw
-.82
Set covering problem structure8,,22,!212!22,N'!2,2!,.-"m.2
"m
2. t*.-".
2
"
".-."2
Build the graphB222,1!,22.-m.2
m
3. l*.-.
2
".-.12
Initial cost on the graphc2,,2'222,1!,22.-'--Q
~U--F
ta- a.52
a
Set covering problem solving8,!,22,,828,28,R'2282.*-a.2
a
1. l2.@ Arial
!w*wgw
-.
2
..-.+2
Introduce new columnss2!222,,2,H,22N2'.N.72
N
generated (i.e. shortest path1,2,!,,2!,'22!,'2,2..2
in the graph)22,1!,22!.:a.2
:a
2. t2.-:.
2
:
..-:.52
:
Get the new optimal solutionH,2,2,H22N,'2222.-.52
(restart from current basis)!!,',!!!2N,2!!,22,''!.--& a.2
& a
3. 2.-& .
2
&
..-& .&2
&
Get dual values ofH,22,2,2,'2!.- .2
constraints,22'!,2'.--'-- Q
fd-- F
\p- t..2
t
Shortest path algorithm882,!,'!82!8222,!8R.*-`.2
`
1. 2.-`.
2
`
..-`.@2
`#
Update the cost of edged using dualH22,,2,,2'2!,21,22'2122,..2
values2,2,'.L.2
L
2. u2.-L.
2
L
..-L.;2
L
Find optimal shortest path using72222N,'22!,'2,22'21.-.#2
Dijkstra or FordH2'!,2!72!2.--8.2
8
3. k2.-8.
2
8
..-8.:2
8
Optimal path are new columns togH2N,2,2,!,2,H,22N2'2.-.2
introduce2!222,,.--$ .2
$
4. r2.-$ .
2
$
..-$ .C2
$ %
Stop if all reduced costs of path are822!,!,22,,2,2''2!2,2,!,.- .2
positive22'2,.-'&{s
2- -
$}f
0m
!--
&&0- -
$
$&--
&&- -
$
$--
&-xt /*:filename*
:files vim_win.txt /*:files*
:fiDFHYZhi
$%(*UVqr+,GH_`ce45LMPRz{67RSjkoquDaa]uDU]c0]
"#')=>YZqrvx < = X Y p q u w
5
6
:
<
s
t
89TUlmqs,-uDaab-13pq
3
4
O
P
g
h
l
n
"HIde|} $&<=XYpquwWzVU]uDuDaa^7/VX\gklc789:;B& p!r!!!,"."k"m""""####@%B%E%F%Q% &&&]uDVUVuDP?]U]U\&&&K&M&&&1'3'o'q'''--..#09001>1F1H111$2<2>2e2g222N3e3g33344T4V444445577~88<6<G<<<==>>F?W?@@ B'BEEEE,H.HKHLHdOnOoOPP3RCR6SFSGSKSLSUSSTUUU]ccUVUVcV]U]][UUVVVWWZXkXXX`YY$Z8Z_Z`ZuZZX[h[i[m[n[w[[[[\/]]]]]^^W__T`^`ddggootpppqq&q.q5q:qKqdqqqqqqqqq7r;rDrLrWr]rmrrrrrrrrr"sdshsqsxssssssssstttVcVUU]cU]]^ttt2totst|ttttttttttttt uuu}v~vvvxxxyyYyZyyy5z7zzz[{\{i{j{k{l{bdq;=Q~Ka"9;Qjl@CD]^U][]cU^
uDPbuD:bvKb
uDbb]uDPUVU]VO^jғӓhjmnyERY[rtŚǚ[f9:
%&%67DEFG'()*+uD4heuD:vKuDHdeuD:vKUceUce^bcuDd]euD:vKuD]VU]I+5LUuwO[@Ano߲·ַطٷNQ{}ĹŹƹ!"mo|}~ !"$&'()J]cJUceUceU^uDkuD:vKuD]VVcR)*UVWXY[\ghimo
]^`hk"#5DUhUhce]J]cUJUceZBG\]j[\^_`k,.TV[\pqrs35&(235 "#JSJ"^ceuDuD:vKuDbVc]U]hUT#$%&(/0345679@AGHIJKNOSTUXY\]_`adghjkoprtv]@ABUWþuDuD:vKuDbVUcJScc]^J]ccehJ]ceJSJ"UceKr+.)34:Nxz1
3
"$./z|BF&'DE ##%%''))J]cVceVcVcbuDоuD:vKUceuDeuD:vKceUuD uD:vKuD]G)++////555566%6'6==> >>>8>9>b>c>>>>>>????? ?&?'?+?,?/?0?uPaPuDP[]UcUVcV]cVc]chuDPJ]*GHYj+fSr*y x =
t4
o
#'xp#*p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#(
p# e#
( ( V]]]]]]]]]]]]]]]]]]]]l l l 16789:;<BC]]]]]]]]]]]]]]]]]]l l !
4
4
& !p!!,"k"""""""#####]p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#lp#
l ######>%?%@%B%D%E%F%Q%R%&& &&
&&&&&K&&1'o'''lp# lp# lp# '''*,---../000F11$2<2e22N3e334T444457~888>:?:f:g:p#p#p#p#p#p#p#Rp#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#Rp#u u
lp# I I$g:
<<<&<5<6<G<T<~<<<<<<<<<<=<=c===========>
>p#p#77777777777l(p#IIIIIIIIIIII
el(p#IIIIIIIIIIII !
>>>>)>H>V>a>b>c>d>>>c?!@BRESEEEE+H,H&MdOOORR1S2S3S4S5S6S77p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p# p#p#p#p#p#p#p#p#
l(p#IIIIIIIIIIII
e"6SGSLSTSUSdSuSSSSSSCTDTtTuTvTwTTTTTTTUKULUMU~UUUUUUU" ! ! " " " " " ! ! ! ! ! " " " " ! ! ! k( yp#IIIIIIIIIIII
k( yp#IIIIIIIIIIII "UUU/V0VVVVVVW%W&W'WNWOWPWQWUWVWcWdW|W}WWWWWWW X/X0X1X5X6X7XYXZXkXX! " " " " ! ! ! ! ! ! ! " " " " " ! ! ! ! ! ! " " k( yp#IIIIIIIIIIII
(XXXXXXXYYYY^Y_Y`YiYYYY Z
ZZ"Z#Z$Z8ZIZJZ`ZaZeZfZsZtZuZZZZZZZ[" " ! ! ! ! ! " " ! ! ! ! " " " " ! ! " " k( yp#IIIIIIIIIIII
([[W[X[i[n[v[w[[[[[[[\\K\L\\\\\\\.]/]T]y]z]~]]]]]]! ! " ! ! " " ! ! " ! ! " " " " " " ! ! " ! k( yp#IIIIIIIIIIII k( yp#IIIIIIIIIIII
"]]]]]]]^^m^n^^^^W__S`T`u`v`!abdddfgggi+kJkKknsptp! ! " " " ! ! p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#Rp#
p#p#p#k( yp#IIIIIIIIIIII
#tpzppppppppppppqqq&q5qIqJqKqdqrqsq|qqqqqqqqq #l4yp#IIIIIIIIIIIIIIII
&l4yp#IIIIIIIIIIIIIIII qqqqr6r7rDrWrkrlrmrrrrrrrrrrrrrrs!s"s/sDsYsbscsdsqssssss #l4yp#IIIIIIIIIIIIIIII
'ssssttt2t?tUtVtatot|tttttttttttttuuuvvvvvxxx p#p#Qp#p#p#p#p#p#p#p#p#p' #l4yp#IIIIIIIIIIIIIIII
$xyYyZyyy5zzY{Z{[{m{n{o{{{~~~\ab;%9:.p#p#p#p#p#p#p#p#
p#p#p#p#p#p#Rp#p#p#p#p#p#p#p#
p#p#p#p#p#p# p#p#p#p#p#p#
p' p' p'
' "!"9؏ُPQj@ABCD\]^jkp#p#p#p#p#p#p#p#p#p#
p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#lp# xxp'
' p' p' !fghjlmnyz78p#p#p#lp# I Ilp# lp# lp# 8=DEYrŚD"# eqr!"O[\Z˥̥p#p#p#p#p#p#p#=p#p#p#=p#p#p#p#Rp#p#p#p#p#Rp#p#p#p#Rp#p#p#p#Rp#p#p#p#Cp#p#p#p#p#p#p#p# & ' ( ) & ' ( ) (+WX&9:uɩʩ=>ڷ ɻʻ$%lۼܼQp#p#/p#p#p#p#p#Rp#p#p#p#p#p#p#Rp#
p# p#p#Rp#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#pZ' ' p' u u &QRno8k2Hlm{NTaf
2p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#h(t?@`a06m
`96p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#00"
486UvIJK\]^jkZ[\^_`aklp#p#p#p#p#p#p#p#
lp#lp# lp# xx
,T
ABTp#p#p#p#p#p#p#p#p#p#p#Rp#p#p#p#p#p#p#
xxlp# I Ilp#
lp#Z[tu0123@$%&2
./?@rp#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p# ' p' *rstAB45U|Dp#p#p#p#p#p#*p#p#p#p#p#p#p#p#p#p#:p#p#p#p#p#p#p#p#p#p#p#p#p#
p#p#
p#p#p#p#p#p#p#p#p#R p' ' p' )qrpq$%()PQKLZ[k
x1
.zp#p#p#p#p#p#p#p#p#p#p#p#p#p#p#p# p#p#
p#p#p#p# p#p#p#p#p#p#Rp#p#p#p#p#p#Rp#p#p#p#
p#
xxxx p' p' 'z&' ""%'()+++0667p#p#
p#x p# p#p#p#p#p#p#Rp# p#p#p#p#Rp#
p#p#p#p#p#p#p# p#p#p#p#p#p#p#p#:p#p#p#
7$
48
"78B9W;<==>>8>b>>>???-?.?/?0?p#p#p#p#p#p#p#p#p#p#p#p#T& n#$
4h.
!K@Normala r@r Heading 1O
<4.8"
U[]c`n@n Heading 2M
<4U]c l@l Heading 3K
<4.U]cj@j Heading 4K
<4.]cd@d Heading 5G
<4.]^@^ Heading 6@
<4.Vcb@b Heading 7I
<4.d@d Heading 8I
<4.Vd @d Heading 9F
x4.ac"A@"Default Paragraph Font *@ Endnote Referenceh &@ Footnote Referenceh @ Header9r @" Footer9r )@1Page Number(@(TOC 1xxp#
U[$@$TOC 4Xp#
c"@"TOC 3p#
V"@"TOC 2p#
Z$@$TOC 5 p#
c$@$TOC 6p#
c$@$TOC 7p#
c$@$TOC 8xp#
c$@$TOC 9@p#
c$@$
Footnote Textc"B@" Body Text
<""@"Caption
xVOFigure
x}s2230<#9S
####################
0<0?!!!!!!!!! !
!!!
!!!!!!!!!!!!!!!!!!*6SB6PXXbmo[xDD
K10<# N]Uq
O
E
Q,GHYj+fSr*yx=t4
o
#'x8;
&p,k @"E" ##$$')***++,---F..$/7?7f7g7
99699:;c;d;;;c<!=?RBSBBBB+E,E&JdLLLOO1P2P3P4P5P6PUPQRVTZU`V$WuWXXwXXYZZ[[W\\S]T]u]v]!^_aaacdddf+hJhKhksmtmmKnnmoopqqqrrrsssssuuuvYvZvvv5wwYxZx[xmxnxoxxx{{{\|a~b~;%9:.!"9،ٌPQj@ABCD\]hm78=DEYrŗD"# eqr!"O[\Zˢ̢+WX&9:uɦʦ=>ڴ ɸʸ$%l۹ܹQRno8k2Hlm{NTaf
2t?@`a06m
`96UvIJK\]\`
,T
ABTZ[tu0123@$%&2
./?@rstAB45U|Dqrpq$%()PQKLZ[kx1
.z
&'"$%&(((-3345B6W89:::0<$44$$4 4DDD$44444$444$DD$4 4 $$44$44$|-&Ut^+)#)#'g:
>6SUX[]tpqsx8Q6rz70?
DEYh$()Uq+G_cd4LPQz6Rjop
"'(=Yqvw<Xpuv5:;s8Tlqr , 1 2 p
3
O
g
l
m
!Hd|$%<Xpuv[xixkx6DF')ִش![pr0<7
2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@2%@::::::::::!F
_Toc449245588
_Toc449417429B_Ref439749978
_Toc449417430B_Ref439588370
_Toc449417431
_Toc449417432
_Toc449417433
_Toc449417434
_Toc449417435
_Toc449417436
_Toc449417437
_Toc449417438B_Ref439590948B_Ref439588505
_Toc449417439
_Toc449417440
_Toc449417441
_Toc449417442
_Toc449417443
_Toc449417444
_Toc449417445
_Toc449417446
_Toc449417447
_Toc449417448
_Toc449417449
_Toc449417450
_Toc449417451
_Toc449417452
_Toc449417453B_Ref413994466
_Toc449417454
_Toc449417455
_Toc449417456
_Toc449417457B_Ref439751348
_Toc449417458
_Toc449417459
_Toc449417460
_Toc449417461
_Toc449417462**?7dLT]+hoxEEeO+&RK
)Lk1<
!"#$%&'(*+e7Lt]Ihx[[6------I> a-->4 m@Times New Roman*wgw`
-m. 2
m
Initialisation'8!2'2!28.--m.2
m
1. *.PIC
NLMETA1TableeCompObjfObjInfoObjectPool WordDocument]SummaryInformation(UDocumentSummaryInformation8MOle
mPIC
kLMETAXCompObjVfObjInfoUEquation Native QOle
sPIC
!qLMETAH1Table $QCompObjofObjInfo#&nObjectPoolWordDocument%'SummaryInformation((DocumentSummaryInformation8zOle
w1Table),CompObjufObjInfo+.tWordDocumentSummaryInformation(-/DocumentSummaryInformation8Ole
PIC
02LMETACompObj15fObjInfoObjectPool47ننOlePres0000WordDocument68aSummaryInformation(9|DocumentSummaryInformation8xOle
PIC
:<LMETACompObj;?fObjInfoObjectPool>A
Ɇ
ɆOlePres000WordDocument@B SummaryInformation(Ole
PIC
CFLMETACompObjEGqObjInfoHEquation Native Ole
PIC
ILLMETACompObjKMqObjInfoNEquation Native Ole
PIC
ORLMETACompObjQSqObjInfoTEquation Native Ole
/PIC
UW-LMETA
CompObjVZhObjInfoWordDocumentY[SummaryInformation(\DocumentSummaryInformation8՜.+,D՜.+,L
px
4jTitre 6>
_PID_GUIDAN{178C0CF7-8945-11D2-BA00-000083283F72}Oh+'0
0<H
T`hpxssIC-ParcC-PC-PNormal.dotArnaud LINZ2naMicrosoft Word 8.0@F#@@@YbjbjWW==] PPPPP\P
tttttoqqqWP Ph
$WK
N
ttt"lt to4BoV#@, oth`4PPc
Initialisation
Set covering problem structure
Build the graph
Initial cost on the graph
Set covering problem solving
Introduce new columns generated (i.e. shortest path in the graph)
Get the new optimal solution (restart from current basis)
Get dual values of constraints
Shortest path algorithm
Update the cost of edged using dual values
Find optimal shortest path using Dijkstra or Ford
Optimal path are new columns to introduce
Stop if all reduced costs of path are positive
`b6CJ5CJ
jUmH7Gab56a
&F
&F
&F7Gab56a
N N!"\#!$%
[$@$NormalmH 2A@2Police par dfautZZ 8 @L(
H
C1?f
s*1111?
f
s*1111?
f
s*1111?
TB
c$8c?ZB
s*8c?ZB
@
s*8c?B
S ?!21<j4!4j2!2j0*'!.jp#!%)j/!:jP".1:j2-J)hJPs1sV.@56>*CJOJQJo(. @56>*CJOJQJo(. @56>*CJOJQJo(. Ps1s)h2-@HP LaserJet 4000 Series PSLPT1:HP LaserJet 4000 Series PSHP LaserJet 4000 Series PSHP LaserJet 4000 Series PSW 4dXA4PRIV''''HP LaserJet 4000 Series PSW 4dXA4PRIV''''T
@GTimes New Roman5Symbol3&Arial"qh,f,f!0IC-ParcArnaud LINZTimes New Roman"- - "- "--- "-I> a--=2 lTimes New Roman4- 2
l
Initialisation'8!2'2!28---2
l
1. *"Times New Roman-82
Set covering problem structure8,,22,!212!22,N'!2,2!,-2
!l
2. *"-"2
Build the graphB222,1!,22-2
l
3. *"-12
Initial cost on the graph2,,2'222,1!,22-'- "-Q
~U--E
r`-52
`
Set covering problem solving8,!,22,,828,28,R'2282--2
`
1. 2+2
Introduce new columns2!222,,2,H,22N2'-72
M
generated (i.e. shortest path1,2,!,,2!,'22!,'2,2-2
in the graph)22,1!,22!-2
9`
2. 252
9
Get the new optimal solutionH,2,2,H22N,'2222-52
(restart from current basis)!!,',!!!2N,2!!,22,''!--2
% `
3. 2&2
%
Get dual values ofH,22,2,2,'2!-2
constraints,22'!,2'--'- "-Q
fd--E
Zo-.2
s
Shortest path algorithm882,!,'!82!8222,!8R--2
_
1. 2@2
_#
Update the cost of edged using dualH22,,2,,2'2!,21,22'2122,-2
values2,2,'-2
K
2. 2;2
K
Find optimal shortest path using72222N,'22!,'2,22'21-#2
Dijkstra or FordH2'!,2!72!2--2
7
3. 2:2
7
Optimal path are new columns toH2N,2,2,!,2,H,22N2'2-2
introduce2!222,,--2
#
4. 2C2
# %
Stop if all reduced costs of path are822!,!,22,,2,2''2!2,2,!,-2
positive22'2,-' "-(i
- "--- "-
$!-- "--- "-
$---L>/.r#/״lJqJlJ
Min ci
Times New Romanxjj
aij
xj
1j
for all task i
FMicrosoft Equation 2.0DS EquationEquation.29q]! L .1`&&MathTypeTimes New Roman--
2
@Min Uk` 2
wa 2
wX12
w
for all t`~`kk`k2
wask i`k`Times New Roman1-
2
wij,, Times New Roman-- 2
j>Times New Roman1- 2
c`Times New Roman-- 2
i, Times New Roman1- 2
j>PSymbol- 2
> 2
w` 2
wPSymbol- 2
6 2
8Times New Roman-- 2
hx 2
wx 2
w ``Times New Roman1- 2
(j5 2
wJj5
&
"Systemn-L]!
FImage Microsoft Word
MSWordD*՜.+,D՜.+,L
px
6jTitre 6>
_PID_GUIDAN{FAF6B5BD-8AA5-11D2-BA01-000083283F72}Oh+'0x
(4@
LX`hpssIC-ParcC-PNormal.dotDIRECTION INFORMATIQUE2REMicrosoft Word 8.0Q@@@@
YbjbjWW==]BBBBBNBCnnnnn
#%%%EjPP
$d.
.
nnn
nn#&4
#
^#nbƶBB
Empty:
Exit with success
Optimum integer values
2. Temporal optimisation
Apply LP (Simplex) to the temporal constraints
Consistent
Apply CP to all the constraints
5. Resource feasibility:
Choose a new ordering constraint C to reduce violations.
i.e. by forcing apart temporal variables
3. Determine set of violated resource and activity constraints
4. Select a violated
constraint
+E]5
jUmHjCJUmH+,DE^9:yziiu_
&Ft{r$+,DE^ N N!u"#o$$%ocWord.Picture.89qL&B%<<
FImage Microsoft Word
MSWordDocWord.Picture.89q
[,@,Normal$CJmH 6@6Titre 2
$<5CJ2A@2Police par dfaut4~)i
4~)i 8@B(
B
3?f
s*1111?
l
01111?
f
s*1111?
r
611111?
r
611111?
TB
c$?TB
c$?r
611111?
TB
c$?
TB
c$?f
s*1111?
TB
c$?f
s*1111?
B
S ?
4*j +0.1j
@/3j`'pa'j(p/Aj +j' -
j`' a'Aj
!.j p#!.)j!Q+qj`'0!a'"jP"1* j`'a'jHrD
z@56>*CJOJQJo(. HrD@HP LaserJet 4000 Series PSLPT1:winspoolHP LaserJet 4000 Series PSHP LaserJet 4000 Series PSW 4dXA4PRIV''''HP LaserJet 4000 Series PSW 4dXA4PRIV''''ȴ1@GTimes New Roman5Symbol3&Arial"hP,P,
!0ZIC-ParcDIRECTION INFORMATIQUE&BM
;&WordMicrosoft Word
D-Times New Romanl- ----< ----
--
-)2
4. Select a violated28,,,,22,,2-"2
* constraint,22'!,2'----u}Times New Roman-/2
5. Resource feasibility:2H,'28,,,!,2'8!2!-;2
Choose a new ordering constraintC222',",#2,H#2!2,!22#,22'!,2-
2
CH-,2
8 to reduce violations.2!,22,,22,22'-82
qi.e. by forcing apart temporal,O22O!2!,22O,2,!P,N22!,-2
variables2,!,2,''--0--$-/2
-2. Temporal optimisation2C,S82,228!S'2!28--12
"-Apply LP (Simplex) to theH222=8!8N2,2!22,-)2
-temporal constraints,N22!,,22'!,2''-2
ConsistentC22'',2'4s7-#2
7 Optimum integerH2N2N2,2,!-2
rvalues2,2,''-k5---
$..K---;;---
$'O;--` -2
Empty:=N22-2
WA Exit withC2!H!8-2
Asuccess'8,,,'''-g
---
$
------
$----p--d|-2
1. 2:2
hApply CP to all the constraintsH222C82,2,,22'!,2''----
$----, \_-- Pj-(2
3. Determine set of2BH,,!N2,C',C2!-+2
violated resource and22,,2+!,'22!,,,,22-)2
activity constraints,,22,22'!,2''-x՜.+,D՜.+,,hp|
Titre 6>
_PID_GUIDAN{44AF3045-A3F2-11D1-9FBF-444553540000}Oh+'0d
,
8DLT\ssJacquet-LagrzeacqNormal.dotrJacquet-Lagrze2cqMicrosoft Word 8.0@@Y7@7Ibjbjs]F(((((
$VJf1
\
1((k (
((
v^( <7
Initial sequence
it = 0
Local Search
with MIP
A new sequence is obtained
it = it + 1
End
If it < nMaxIter
[Figure: recovered labels: Algebraic Model 1a, Solver 1a; Algebraic Model 1b, Solver 1b; linked by data flows]
[Figure: Design Activity; recovered labels: Conceptual Model 1; Algorithm Design 1a with Algebraic Model 1a; Algorithm Design 1b with Algebraic Model 1b; linked by data flows]
[Equations recovered from embedded Equation 3.0 objects; illegible parts are left as "…":
(1) max over i and j of produced(c_i, j)
(2) for all i: sum over j of produced(c_i, j) <= …
(3) Min sum over i of (c_i x_i + h_i y_i)
subject to, for all j: sum over i of a_ij x_i + sum over i of b_ij y_i <= b_j
with x_i >= 0 and y_i in N]
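A candidate solution can be checked against a model of the recovered form, min sum_i (c_i x_i + h_i y_i) subject to sum_i a_ij x_i + sum_i b_ij y_i <= b_j for all j, with x_i >= 0 and y_i integer. The data values and function names below are illustrative assumptions, not taken from the document.

```python
# Objective evaluation and feasibility check for the recovered model form.
# All data here is invented for illustration.

def objective(c, h, x, y):
    # sum_i (c_i * x_i + h_i * y_i)
    return sum(ci * xi + hi * yi for ci, hi, xi, yi in zip(c, h, x, y))

def feasible(a, b_coef, rhs, x, y):
    # x_i >= 0 and y_i integer, then one row check per constraint j:
    # sum_i (a_ij * x_i + b_ij * y_i) <= b_j
    if any(xi < 0 for xi in x) or any(not float(yi).is_integer() for yi in y):
        return False
    return all(
        sum(a[j][i] * x[i] + b_coef[j][i] * y[i] for i in range(len(x))) <= rhs[j]
        for j in range(len(rhs))
    )

c, h = [2.0, 3.0], [1.0, 1.0]
a = [[1.0, 2.0]]          # a_ij, one constraint row j
b_coef = [[1.0, 0.0]]     # b_ij
rhs = [10.0]              # b_j
x, y = [1.0, 2.0], [3, 0]
```

Such a checker is useful during the Refinement task to validate that a solver's output actually satisfies the algebraic model as written.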
[Figure: decomposition of the Staff Scheduling Problem; recovered labels: Shifts Generation (MIP, large scale); Shifts Assignment (MIP, large scale); Roster building (CLP); Shifts lengths extension (CLP); Rosters Construction]
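The recovered decomposition can be sketched as a pipeline of stages, each tagged with the solver technology named in the figure (MIP for the large-scale stages, CLP for the roster-building and shift-extension stages). Every stage function below is a trivial hypothetical stub, not the document's implementation.

```python
# Hypothetical pipeline sketch of the recovered staff scheduling
# decomposition. Each stub stands in for a MIP or CLP subproblem.

def shifts_generation(demand):
    # MIP (large scale): generate one candidate shift per demand period.
    return [f"shift_{h}" for h in demand]

def shifts_assignment(shifts, staff):
    # MIP (large scale): assign staff to shifts, here round-robin.
    return {s: staff[i % len(staff)] for i, s in enumerate(shifts)}

def roster_building(assignment):
    # CLP: group each person's shifts into a roster (shift-length
    # extension, also CLP in the figure, would refine these rosters).
    rosters = {}
    for shift, person in assignment.items():
        rosters.setdefault(person, []).append(shift)
    return rosters

def solve_staff_scheduling(demand, staff):
    shifts = shifts_generation(demand)
    assignment = shifts_assignment(shifts, staff)
    return roster_building(assignment)

rosters = solve_staff_scheduling(demand=[8, 12, 16], staff=["ann", "bob"])
```

The point of the decomposition is that each stage sees a smaller problem in a form suited to its solver, with the stage outputs acting as the data flows between subproblems.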
Solution Design ACTIVITY
Chapter Contents
4.1 Problem Solution Deliverable (PSD)
4.2 Algorithm Characterisation Task
4.2.1 Guideline: Initial problem analysis
4.2.1.1 Extracting problem features from the PDD
4.2.1.2 Classification of algebraic properties
4.2.2 Guideline: Find the flavours of the algorithm based on problem categories
4.2.3 Guideline: Problem decomposition
4.2.3.1 Focusing on problem categories
4.2.3.2 Focusing on constraint semantics
4.3 Problem Modelling Task
4.3.1 Definition of an algebraic model
4.3.1.1 Sets and indices
4.3.1.2 Input data
4.3.1.3 Variables
4.3.1.4 Constraints
4.3.1.5 Objective (or cost) function
4.3.2 Guideline: How to build the algebraic model
4.3.2.1 Variable selection
4.3.2.2 Constraint and decision criteria formulation
4.3.2.3 Dealing with problem decomposition
4.3.3 Examples of algebraic models
4.4 Refinement Task
4.4.1 Guideline: Improving the algorithm efficiency
4.4.1.1 Tightening integration of solvers: Hybrid algorithm
4.4.1.2 A note on scalability
4.4.2 Guideline: Revising the problem design
4.4.3 Minor changes
4.4.3.1 Changing constraint formulation
4.4.3.2 Adding redundant constraints
4.4.4 Major changes
4.4.4.1 Simplification
4.4.4.2 Duality form
4.4.5 Guideline: On fractal design
Position in the lifecycle
During the