Research Paper On Portfolio Optimization With Linear, Non-Linear And Integer Programming
Integer Programming
Integer programming is the branch of mathematical programming in which some or all of the variables are additionally constrained to take integer values. The simplest approach to an integer programming problem is to reduce it to a linear programming problem and round the resulting solution to integers. An important special case of the integer linear programming problem arises when the variables x can take only two values, 0 and 1. Such problems are often called Boolean (0-1) programming problems. The best known of them are the assignment problem (assigning employees to jobs), route-selection problems (the traveling salesman problem, the postman problem), the maximum matching problem, and so on. Integer programming is also used for optimization problems of a company in which a variable equal to 0 or 1 indicates whether a given piece of equipment is included.
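To make the assignment problem mentioned above concrete, the short sketch below solves a small instance with SciPy's linear_sum_assignment routine; the 3x3 cost matrix is hypothetical and chosen only for illustration.

# A minimal sketch of the assignment problem (a 0-1 integer program):
# assign each employee to exactly one job so that total cost is minimal.
# The 3x3 cost matrix is hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)
print([(int(r), int(c)) for r, c in zip(rows, cols)])  # optimal employee-to-job pairs
print(int(cost[rows, cols].sum()))                     # minimal total cost (5 for this matrix)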
Here are some examples of integer linear programming problems.
Examples
Example 1
Knapsack problem. There is an m-vector of limited resources $b = (b_1, b_2, \dots, b_m)$ that can be used for transporting various cargoes. Each cargo $j = 1, 2, \dots, n$ has the following properties: 1) indivisibility; 2) utility $c_j$; 3) consumption $a_{ij}$ of the i-th resource per unit of the j-th cargo ($i = 1, 2, \dots, m$; $j = 1, 2, \dots, n$). Select the amounts of cargo for transportation that maximize the total utility of the trip (the total value of the goods carried during the trip).
We construct a mathematical model of the problem. Denote by $x_j$ the number of units of the j-th cargo selected for transport. The indivisibility requirement corresponds to the condition:
$x_j \ge 0$, $x_j$ integer, $j = 1, 2, \dots, n$.
Comparing the consumption of each type of resource for transporting the selected cargo with its availability leads to the constraints:
$\sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i = 1, 2, \dots, m$
The total utility of the trip is determined by:
$F(x) = \sum_{j=1}^{n} c_j x_j$
A special case of the problem above is the 0-1 knapsack problem, in which each item $j = 1, 2, \dots, n$ is either chosen or not chosen. The mathematical model then has the following form:
$F(x) = \sum_{j=1}^{n} c_j x_j \to \max$
subject to
$\sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i = 1, 2, \dots, m$
$x_j \in \{0, 1\}, \quad j = 1, 2, \dots, n$
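As a small illustration of this 0-1 model, the sketch below enumerates all 0-1 vectors for a single-resource instance (m = 1); the utilities, weights, and capacity are invented for the example.

# Brute-force sketch of the 0-1 knapsack model above with one resource (m = 1).
# The utilities c, consumptions a and capacity b are hypothetical.
from itertools import product

c = [10, 7, 12, 4]        # utility c_j of each cargo
a = [3, 2, 4, 1]          # resource consumption a_1j per unit of cargo j
b = 6                     # available amount b_1 of the resource

best_value, best_x = 0, None
for x in product((0, 1), repeat=len(c)):                  # all 0-1 vectors
    if sum(a[j] * x[j] for j in range(len(c))) <= b:      # feasibility check
        value = sum(c[j] * x[j] for j in range(len(c)))
        if value > best_value:
            best_value, best_x = value, x

print(best_x, best_value)   # best selection of cargoes and its total utility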
Example 2
Find the maximum of the function
$F(x_1, x_2) = x_1 + x_2$
in the region:
$\begin{cases} 2x_1 + 11x_2 \le 38 \\ x_1 + x_2 \le 7 \\ 4x_1 - 5x_2 \le 5 \\ x_1 \ge 0,\ x_2 \ge 0 \\ x_1, x_2 \ \text{integer} \end{cases}$
We solve the problem geometrically. The search region is the polygon OABCD. Since the level lines of the objective function are parallel to side BC of the polygon, the extremum is attained at the vertices B(13/3, 8/3) and C(40/9, 23/9), as well as at every point of the segment BC, and equals 7. We are interested only in points with integer coordinates, so neither B nor C is a valid solution of the problem. Rounding the coordinates of B and C gives the point (4, 3), which does not belong to the feasible region. It is easy to show that the integer optimum, equal to 5, is attained at the points M(2, 3) and N(3, 2), both of which lie inside the search region.
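The integer optimum quoted above can be confirmed by a direct enumeration of the integer points of the region; the short check below is only a verification of this particular example, not a general integer programming method.

# Enumerate the integer points of the region from Example 2 and confirm that
# the integer maximum of F = x1 + x2 is 5, attained at (2, 3) and (3, 2).
best, best_val = [], None
for x1 in range(0, 8):
    for x2 in range(0, 8):
        feasible = (2*x1 + 11*x2 <= 38 and x1 + x2 <= 7
                    and 4*x1 - 5*x2 <= 5)
        if feasible:
            val = x1 + x2
            if best_val is None or val > best_val:
                best_val, best = val, [(x1, x2)]
            elif val == best_val:
                best.append((x1, x2))

print(best_val, best)   # 5 [(2, 3), (3, 2)]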
The inadequacy of rounding is also emphasized by the following consideration. Although integer variables usually express a number of indivisible objects, these variables may have other interpretations. In the knapsack problem, for example, the solution is expressed by a Boolean variable (x = 0 or x = 1); in this case it is meaningless to operate with fractional values of x, and the rounding procedure is logically unacceptable. This example shows that solving integer programming problems requires dedicated optimization techniques. In addition, the optimal solution of an integer programming problem does not necessarily lie on the boundary of the polyhedron (polygon) of constraints, as is typical for linear programming problems.
Linear Programming
Problems of optimal planning that consist in finding the optimum of a given objective function (a linear form) under restrictions in the form of linear equations and linear inequalities belong to linear programming.
Linear programming is the most developed and most widely used branch of mathematical programming. This is explained as follows:
the mathematical models of a very large number of economic problems are linear in the unknown variables;
these types of problems are currently the best studied;
special finite methods have been developed for solving them, together with standard computer programs that implement these methods;
many linear programming problems have been solved and are now widely applied in the national economy;
some problems that are not linear in their original formulation can, after a number of additional constraints and assumptions, become linear or be reduced to a form in which they can be solved by linear programming.
So, linear programming is the branch of mathematical programming that studies methods for solving extremal problems characterized by a linear dependence between the variables and a linear criterion. A necessary element in the formulation of a linear programming problem is a set of restrictions: on the availability of resources, the amount of demand, the production capacity of enterprises, and other production factors. The essence of linear programming is to find the points at which a function attains its greatest or least value on a certain set defined by a system of constraints on its arguments, a system that usually has an infinite number of solutions. Each set of values of the variables (the arguments of the function F) that satisfies the system of constraints is called a feasible plan of the linear programming problem. The function F whose maximum or minimum is sought is called the objective function. A feasible plan at which the maximum or minimum of F is attained is called an optimal plan of the problem. The system of constraints that determines the set of plans is dictated by the conditions of production. The linear programming problem (LPP) consists in choosing the most advantageous (optimal) plan from the set of feasible plans. In its general formulation, the linear programming problem looks as follows:
There are variables $x = (x_1, x_2, \dots, x_n)$ and a function of these variables $f(x) = f(x_1, x_2, \dots, x_n)$, which is called the objective function. The task is to find an extremum (maximum or minimum) of the objective function f(x) provided that the variables x belong to some domain G:
$f(x) \to \mathrm{extr}, \quad x \in G$
Depending on the form of the function f(x) and the domain G, different areas of mathematical programming are distinguished: quadratic programming, convex programming, integer programming, etc. Linear programming is characterized by the fact that
a) the function f(x) is a linear function of the variables $x_1, x_2, \dots, x_n$;
b) G is defined by the system of linear equations or inequalities.
A mathematical model of any linear programming problem includes:
maximum or minimum of the objective function (optimality criterion);
system of restrictions in the form of linear equations and inequalities;
requirement of non-negativity of the variables.
Example
Suppose that there are two machines (S1, S2), each of which is used to produce two types of products (P1, P2). Machine S1 takes 1 hour to produce a unit of P1 and 2 hours to produce a unit of P2. Machine S2 takes 2 hours per unit of P1 and 1 hour per unit of P2. Machine S1 can work no more than 10 hours a day, and machine S2 no more than 8 hours. The value of a unit of P1 is c1 dollars, and the value of a unit of P2 is c2 dollars.
We construct a mathematical model of the problem:
We denote by x1 and x2 the numbers of units of products P1 and P2 that are planned for production. The value of the output is F = c1x1 + c2x2. We want to choose x1 and x2 so that F takes its maximum value. The variables x1, x2 cannot take arbitrary values: they are limited by the conditions of production, namely by the fact that each machine can work only a limited time. Producing the products P1 takes 1·x1 hours of machine S1's time, and producing the products P2 takes 2·x2 hours. Since the working time of machine S1 does not exceed 10 hours, the values x1 and x2 must satisfy the inequality x1 + 2x2 ≤ 10.
Similarly, we obtain the inequality for machine S2: 2x1 + x2 ≤ 8.
Furthermore, by the meaning of the problem the quantities x1, x2 cannot be negative: x1 ≥ 0, x2 ≥ 0.
Such a problem is briefly written as follows:
$\begin{cases} x_1 + 2x_2 \le 10 \\ 2x_1 + x_2 \le 8 \\ x_1 \ge 0,\ x_2 \ge 0 \end{cases} \qquad F = c_1 x_1 + c_2 x_2 \to \max$
Thus, the mathematical model of the problem is: find an output plan X = (x1, x2) satisfying the system above.
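For illustration, this model can also be solved numerically. The sketch below uses scipy.optimize.linprog with hypothetical prices c1 = 3 and c2 = 2 (the text leaves them symbolic); since linprog minimizes, the objective is negated.

# A sketch of solving the two-machine model with SciPy; the prices
# c1 = 3, c2 = 2 are hypothetical, since the text leaves them symbolic.
from scipy.optimize import linprog

c = [-3, -2]                      # negate: linprog minimizes, we maximize
A_ub = [[1, 2],                   # x1 + 2*x2 <= 10  (machine S1)
        [2, 1]]                   # 2*x1 + x2 <= 8   (machine S2)
b_ub = [10, 8]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)            # optimal plan (2, 4) with F = 14 for these prices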
In other situations the problem may involve a large number of variables, and the system of restrictions may contain equalities as well as inequalities. In its most general form, the linear programming problem is therefore formulated as follows:
$\begin{cases} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n \ \{\ge, \le, =\}\ b_1 \\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n \ \{\ge, \le, =\}\ b_2 \\ \cdots \\ a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n \ \{\ge, \le, =\}\ b_m \\ x_1 \ge 0,\ x_2 \ge 0,\ \dots,\ x_n \ge 0 \end{cases} \qquad F = c_1 x_1 + c_2 x_2 + \dots + c_n x_n \to \max\ (\min)$
Non-Linear Programming
Nonlinear programming (NLP) is the case of mathematical programming in which the objective function or at least one constraint is a nonlinear function. A nonlinear programming problem is formulated as the problem of finding the optimum of a certain objective function $F(x_1, \dots, x_n)$ under the conditions
$g_j(x_1, \dots, x_n) \ge 0$
where $x_i$, $i = 1, \dots, n$, are the parameters, $g_j$, $j = 1, \dots, s$, are the constraints, n is the number of parameters, and s is the number of constraints.
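As a small illustration of this general form, the sketch below minimizes a hypothetical nonlinear objective under one constraint written as g(x) ≥ 0, using scipy.optimize.minimize with the SLSQP method; the specific functions are invented for the example.

# A minimal NLP sketch in the form F(x) -> min, g(x) >= 0; the objective
# and constraint are hypothetical, chosen only to illustrate the form.
import numpy as np
from scipy.optimize import minimize

def F(x):                         # nonlinear objective
    return (x[0] - 1)**2 + (x[1] - 2)**2

def g(x):                         # constraint g(x) >= 0
    return 4 - x[0]**2 - x[1]**2  # points inside a circle of radius 2

res = minimize(F, x0=np.array([0.0, 0.0]), method='SLSQP',
               constraints=[{'type': 'ineq', 'fun': g}])
print(res.x, res.fun)             # constrained minimizer and objective value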
In contrast to the linear programming problem, in a nonlinear programming problem the optimum does not necessarily lie on the boundary of the domain defined by the constraints.
One of the techniques that allow a nonlinear programming problem to be reduced to the solution of a system of equations is the method of Lagrange multipliers.
If the objective function F is linear and the feasible region is a polytope, then the problem is a linear programming problem that can be solved by well-known linear programming methods.
If the objective function is concave (for a maximization problem) or convex (for a minimization problem) and the set of constraints is convex, then the problem is called convex, and in most cases general methods of convex optimization can be used.
If the objective function is a ratio of a concave and a convex function (for maximization) and the constraints are convex, then the problem can be transformed into a convex optimization problem using the technique of fractional programming.
There are several methods for solving nonconvex problems. One approach is to use special formulations of linear programming problems. Another involves branch and bound methods, in which the problem is divided into subproblems that are solved with convex (for a minimization problem) or linear approximations forming a lower bound on the total cost within the subdivision. With further subdivisions, at some point an actual solution is obtained whose cost equals the best lower bound obtained for any of the approximate solutions. This solution is optimal, although it may not be unique. The algorithm can also be stopped at an earlier stage, with the assurance that the best solution lies within an allowable deviation from the best point found; such points are called ε-optimal. Terminating at ε-optimal points is usually necessary to guarantee finite termination. This is especially useful for large, difficult problems and for problems with uncertain costs or values, where the uncertainty can be estimated from a corresponding reliability assessment.
Under differentiability and regularity conditions, the Karush-Kuhn-Tucker (KKT) conditions provide necessary conditions for optimality. Under convexity, these conditions are also sufficient.
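Written for the maximization form of the problem above (maximize F(x) subject to $g_j(x) \ge 0$), one standard statement of the KKT conditions is:

$\nabla F(x^*) + \sum_{j=1}^{s} \mu_j \nabla g_j(x^*) = 0, \qquad \mu_j \ge 0, \qquad g_j(x^*) \ge 0, \qquad \mu_j\, g_j(x^*) = 0, \quad j = 1, \dots, s$

that is, stationarity of the Lagrangian, dual feasibility, primal feasibility, and complementary slackness.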
Methods of nonlinear programming are used to solve optimization problems with a nonlinear objective function. The independent variables may be subject to restrictions in the form of nonlinear relationships expressed as equations or inequalities. Essentially, nonlinear programming methods are used when none of the methods above allows progress in solving the optimization problem. For this reason they are sometimes referred to as direct methods for solving optimization problems.
Called "methods of nonlinear programming" brings together a large group of numerical methods, many of which are adapted for solving optimization problems of the corresponding class. The choice of method is due to the complexity of calculating the criterion of optimality and complexity constraints, the required accuracy solutions, available computer power, etc. A number of nonlinear programming techniques almost always used in conjunction with other methods of optimization, such as the scan method in the dynamic programming. In addition, these methods are the basis for the construction of systems of automatic optimization - SEO, directly applied to industrial process control.
Example
There are three types of securities on the financial market. The effectiveness (the average expected return per security) of each security is given:
$M = (m_1, m_2, m_3) = (5, 10, 15)$
Also, the covariance matrix is:
$U = \begin{pmatrix} 2 & 1 & 2 \\ 1 & 3 & 3 \\ 2 & 3 & 4 \end{pmatrix}$
The goal is to construct a portfolio with minimum risk that has a target effectiveness of:
$m_p = 12$
We solve this problem using the method of Lagrange multipliers. Form the Lagrange function:
$F = 2x_1^2 + 3x_2^2 + 4x_3^2 + 2x_1x_2 + 4x_1x_3 + 6x_2x_3 + \lambda_1(12 - 5x_1 - 10x_2 - 15x_3) + \lambda_2(1 - x_1 - x_2 - x_3)$
Find the partial derivatives with respect to each variable:
$\dfrac{\partial F}{\partial x_1} = 4x_1 + 2x_2 + 4x_3 - 5\lambda_1 - \lambda_2$
$\dfrac{\partial F}{\partial x_2} = 2x_1 + 6x_2 + 6x_3 - 10\lambda_1 - \lambda_2$
$\dfrac{\partial F}{\partial x_3} = 4x_1 + 6x_2 + 8x_3 - 15\lambda_1 - \lambda_2$
$\dfrac{\partial F}{\partial \lambda_1} = 12 - 5x_1 - 10x_2 - 15x_3, \qquad \dfrac{\partial F}{\partial \lambda_2} = 1 - x_1 - x_2 - x_3$
Equate them to 0 and obtain the system of equations:
$\begin{cases} 4x_1 + 2x_2 + 4x_3 - 5\lambda_1 - \lambda_2 = 0 \\ 2x_1 + 6x_2 + 6x_3 - 10\lambda_1 - \lambda_2 = 0 \\ 4x_1 + 6x_2 + 8x_3 - 15\lambda_1 - \lambda_2 = 0 \\ 5x_1 + 10x_2 + 15x_3 = 12 \\ x_1 + x_2 + x_3 = 1 \end{cases}$
Solving the system, we obtain:
$(x_1, x_2, x_3, \lambda_1, \lambda_2) = (0.2,\ 0.2,\ 0.6,\ 0.32,\ 2)$
Hence, the extremum of the function is:
$(x_1, x_2, x_3) = (0.2,\ 0.2,\ 0.6), \qquad z(x_1, x_2, x_3) = z(0.2,\ 0.2,\ 0.6) = 2.92$, where z denotes the portfolio variance (the risk being minimized).
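Because the system above is linear in (x1, x2, x3, λ1, λ2), the result can be checked numerically; the sketch below solves that 5×5 system with NumPy and recomputes the portfolio variance x^T U x.

# Numerical check of the portfolio example: solve the 5x5 linear system
# from the Lagrange conditions, then recompute the variance x^T U x.
import numpy as np

U = np.array([[2, 1, 2],
              [1, 3, 3],
              [2, 3, 4]], dtype=float)   # covariance matrix
m = np.array([5, 10, 15], dtype=float)   # expected returns
m_p = 12.0                               # target portfolio return

A = np.zeros((5, 5))
A[:3, :3] = 2 * U                        # stationarity: 2*U*x - l1*m - l2*1 = 0
A[:3, 3] = -m
A[:3, 4] = -1.0
A[3, :3] = m                             # return constraint: m^T x = m_p
A[4, :3] = 1.0                           # budget constraint: x1 + x2 + x3 = 1
rhs = np.array([0.0, 0.0, 0.0, m_p, 1.0])

sol = np.linalg.solve(A, rhs)
x, lam = sol[:3], sol[3:]
print(x, lam)                            # (0.2, 0.2, 0.6) and (0.32, 2.0)
print(x @ U @ x)                         # portfolio variance: 2.92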