**Prof. Ms. Manisha A. Bhusa**

Email: manishabhusa@gmail.com

*Department of Computer Science & Engineering,*

*M.B.E. Society’s College of Engineering, Ambajogai Pin: 431517, Dist: Beed, Maharashtra*

**Abstract:**

*Knapsack problems are among the simplest NP-hard problems and are some of the most intensively studied combinatorial optimization problems. The knapsack problem considers a set of items, each having an associated cost and a weight. The problem is to choose a subset of the given items such that the corresponding total cost is minimized while the total weight satisfies the specified demand. Problems in cargo loading, cutting stock, bin packing, budget control and financial management may be formulated as Multiple Knapsack Problems (MKP). The MKP can be seen as a general model for any kind of binary problem with positive coefficients, and it can be thought of as a resource allocation problem.*

**Keywords: Knapsack, greedy technique, Dynamic Programming, Resource Allocation**

## 1. INTRODUCTION

Some of the well-studied variants of the knapsack problem [2, 3] are bounded/unbounded knapsack problems, subset-sum problems, change-making problems, multiple knapsack problems, multiple-choice knapsack problems, and bin packing problems. The applications of these problems range from industrial applications and financial management to personal health care. The common flavor in most of these problems is resource allocation: the allocation of a specific amount of a single resource among competing alternatives is often modeled as a knapsack problem or one of its variants. In nonlinear knapsack problems there are multiple units of each item, and the cost of including an item varies nonlinearly with the number of units [5].

In the knapsack problem, n objects and a knapsack are given. Object i has a weight Wi and a profit Pi, and the knapsack has capacity m. If a fraction xi, where 0 <= xi <= 1, of object i is placed into the knapsack, then a profit of Pi xi is earned. The objective is to obtain a filling of the knapsack that maximizes the total profit earned, with the total weight of all chosen objects at most m. The problem can be stated as:

maximize ∑ Pi xi, i = 1 to n (1)

subject to ∑ Wi xi <= m, i = 1 to n (2)

and 0 <= xi <= 1, 1 <= i <= n (3)

Here the Wi's and Pi's are positive numbers. A feasible solution is any set (x1, x2, …, xn) satisfying equations 2 and 3. An optimal solution is a feasible solution for which equation 1 is maximized.

The knapsack problem calls for selecting a subset of the objects and hence fits the subset paradigm. In addition, the knapsack problem also involves the selection of an xi for each object.
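As a small illustration of equations 1 to 3, consider an instance with n = 3 objects. The profits, weights and capacity below are example values chosen for illustration, and the helper names are ours, not from the paper:

```python
# Example instance (illustrative values): 3 objects, capacity m = 20.
P = [25, 24, 15]   # profits Pi
W = [18, 15, 10]   # weights Wi
m = 20

def profit(x):
    """Objective of equation 1: total profit of fraction vector x."""
    return sum(p * xi for p, xi in zip(P, x))

def feasible(x):
    """Constraints of equations 2 and 3 (small tolerance for float rounding)."""
    return (sum(w * xi for w, xi in zip(W, x)) <= m + 1e-9
            and all(0 <= xi <= 1 for xi in x))

# Three feasible fillings, each using the full capacity m:
x_a = (1, 2/15, 0)    # profit 28.2
x_b = (0, 2/3, 1)     # profit 31.0
x_c = (0, 1, 1/2)     # profit 31.5 (the best of the three)
```

All three fillings are feasible, but x_c earns the largest total profit, which previews the ratio-based greedy strategy discussed later.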

## 2. SPONSORED SEARCH

Several recent papers have considered applications of the knapsack problem to auction design [12]. Knapsack algorithms have also been used to design bidding strategies for budget-constrained advertisers in sponsored search auctions [13, 15, 16]. The algorithms they develop are inspired by offline knapsack problems. More recently, [17] modeled the bidding problem using an online knapsack.

## 3. PROBLEM FORMULATION

The knapsack problem can be solved by the greedy technique, dynamic programming, backtracking, and branch and bound techniques.

**3.1. Greedy Technique:**

Several simple greedy strategies for obtaining feasible solutions that fill the knapsack exactly to capacity m suggest themselves:

- Try to fill the knapsack by including the next object with the largest profit. If the object under consideration does not fit, a fraction of it is included to fill the knapsack. Thus each time an object is included, the largest possible increase in profit value is obtained. However, if only a fraction of the last object is included, a bigger increase might have been possible using a different object, so this strategy does not always yield an optimal solution.
- Try to be greedy with capacity and use it up as slowly as possible. This requires considering the objects in order of nondecreasing weights. This strategy, too, is suboptimal.
- Use an algorithm that strives for a balance between the rate at which profit increases and the rate at which capacity is used: at each step, include the object with the maximum profit per unit of capacity used. Thus objects are considered in order of nonincreasing ratio Pi/Wi. If the objects have already been sorted into nonincreasing order of Pi/Wi, then the function GreedyKnapsack() obtains solutions corresponding to this strategy [1].

Disregarding the time to initially sort the objects, each of the three strategies outlined requires only O(n) time.
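The third, ratio-based strategy can be sketched as follows. This is a minimal illustration written for this discussion; the reference version is the GreedyKnapsack() function of [1], and the code below only assumes the behavior described above:

```python
def greedy_knapsack(P, W, m):
    """Fractional knapsack by the ratio rule: consider objects in
    nonincreasing P[i]/W[i] order, take each whole object that fits,
    and a fraction of the first object that does not."""
    order = sorted(range(len(P)), key=lambda i: P[i] / W[i], reverse=True)
    x = [0.0] * len(P)          # solution vector, one fraction per object
    remaining = m               # unused knapsack capacity
    for i in order:
        if W[i] <= remaining:   # the whole object fits
            x[i] = 1.0
            remaining -= W[i]
        else:                   # fill the remainder with a fraction, then stop
            x[i] = remaining / W[i]
            break
    return x

# On the instance P = (25, 24, 15), W = (18, 15, 10), m = 20 this
# returns x = [0.0, 1.0, 0.5] with total profit 31.5.
x = greedy_knapsack([25, 24, 15], [18, 15, 10], 20)
```

Apart from the initial sort, the loop runs in O(n) time, consistent with the bound stated above.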

**3.2. 0/1 Knapsack using Dynamic Programming:**

The 0/1 knapsack problem is the same as the knapsack problem solved by the greedy technique except that the xi's are restricted to the values 0 and 1. Using KNAP(l, j, y) to represent the problem

maximize ∑ Pi xi, i = l to j

subject to ∑ Wi xi <= y, i = l to j

and xi = 0 or 1, l <= i <= j (4)

the knapsack problem is KNAP(1, n, m). Let y1, y2, …, yn be an optimal sequence of 0/1 values for x1, x2, …, xn, respectively. If y1 = 0, then y2, y3, …, yn must constitute an optimal sequence for the problem KNAP(2, n, m); if it did not, then y1, y2, …, yn would not be an optimal sequence for KNAP(1, n, m). If y1 = 1, then y2, …, yn must be an optimal sequence for the problem KNAP(2, n, m - W1). If it were not, then there would be another 0/1 sequence z2, z3, …, zn such that

∑ Wi zi <= m - W1, i = 2 to n

and ∑ Pi zi > ∑ Pi yi, i = 2 to n

Hence, the sequence y1, z2, z3, …, zn would be a sequence for equation 4 with greater value, contradicting the optimality of y1, y2, …, yn.

The principle of optimality holds: let S0 be the initial problem state, and assume that n decisions di, 1 <= i <= n, have to be made. Let D1 = {r1, r2, …, rj} be the set of possible decision values for d1. Let Si be the problem state following the choice of decision ri, 1 <= i <= j, and let Ti be an optimal sequence of decisions with respect to the problem state Si. Then, when the principle of optimality holds, an optimal sequence of decisions with respect to S0 is the best of the decision sequences ri, Ti, 1 <= i <= j.

Let gj(y) be the value of an optimal solution to KNAP(j+1, n, y). Then g0(m) is the value of an optimal solution to KNAP(1, n, m). The possible decisions for x1 are 0 and 1 (D1 = {0, 1}). From the principle of optimality it follows that

g0(m) = max{g1(m), g1(m - W1) + P1} (5)

Let y1, y2, …, yn be an optimal solution to KNAP(1, n, m). Then, for each j, 1 <= j <= n, the subsequences y1, …, yj and yj+1, …, yn must be optimal solutions to the problems KNAP(1, j, ∑ Wi yi, i = 1 to j) and KNAP(j+1, n, m - ∑ Wi yi, i = 1 to j), respectively.

This observation allows equation 5 to be generalized to

gi(y) = max{gi+1(y), gi+1(y - Wi+1) + Pi+1} (6)

The recursive application of the principle of optimality results in a recurrence equation of type 6. Dynamic programming algorithms solve this recurrence to obtain a solution to the given problem instance. Recurrence 6 can be solved using the knowledge that gn(y) = 0 for all y >= 0 and gn(y) = -∞ for y < 0. From gn(y), one can obtain gn-1(y) using equation 6 with i = n-1; then, using gn-1(y), one can obtain gn-2(y). Repeating in this way, one can determine g1(y) and finally g0(m) using equation 6 with i = 0.

Looking backward on the sequence of decisions x1, x2, …, xn gives

fj(y) = max{fj-1(y), fj-1(y - Wj) + Pj} (7)

where fj(y) is the value of an optimal solution to KNAP(1, j, y).

The value of an optimal solution to KNAP(1, n, m) is fn(m). Equation 7 can be solved by beginning with f0(y) = 0 for all y, y>=0, and f0(y) = – ∞, for all y, y<0. From this f1, f2, —, fn can be successively obtained.

The algorithm for the knapsack problem using dynamic programming is called DKnap(). When both the Pj's and Wj's are integers, the time and space complexity is O(min{2^n, n ∑ Pi, nm}), where the sum is over 1 <= i <= n.
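Recurrence 7 can be realized directly as a table-filling algorithm when the weights are integers. The sketch below illustrates that recurrence under this assumption; it is not the DKnap() algorithm itself:

```python
def knapsack_01(P, W, m):
    """0/1 knapsack via recurrence 7: f[j][y] is the value of an
    optimal solution to KNAP(1, j, y), assuming integer weights."""
    n = len(P)
    # f[0][y] = 0 for all y >= 0; negative capacities never arise
    # below because y - W[j] is looked up only when W[j] <= y.
    f = [[0] * (m + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        for y in range(m + 1):
            f[j][y] = f[j - 1][y]                # choose xj = 0
            if W[j - 1] <= y:                    # choose xj = 1, if it fits
                f[j][y] = max(f[j][y],
                              f[j - 1][y - W[j - 1]] + P[j - 1])
    return f[n][m]                               # fn(m)

# On P = (25, 24, 15), W = (18, 15, 10), m = 20 the 0/1 optimum is 25
# (object 1 alone), less than the fractional optimum of 31.5.
best = knapsack_01([25, 24, 15], [18, 15, 10], 20)
```

The table has (n+1)(m+1) entries, giving O(nm) time and space, which is one of the three terms in the complexity bound above.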

## 4. APPLICATIONS

The knapsack problem is one of the famous challenges in combinatorial optimization, with numerous applications in theory as well as in practice. It also arises as a subproblem in several algorithms for more complex Combinatorial Optimization Problems (COPs), and these algorithms benefit from any improvement in the field of the Multiple Knapsack Problem (MKP). In [8] it is proposed to use the MKP in a fault tolerance problem, and in [9] a public-key cryptography scheme is designed whose security relies on the difficulty of solving the MKP. Martello and Toth [10] mention that two-processor scheduling problems may be solved as an MKP. Other application areas are industrial management, naval and aerospace engineering, and computational complexity theory. The shortest path problem in a transportation network deals with determining the subset of connected roads that collectively comprise the shortest driving distance, the smallest driving time or the cheapest fare between two cities. The MKP can be seen as a general model for any kind of binary problem with positive coefficients [11]. The MKP can be viewed as a resource allocation problem in which there are m resources (the knapsacks) and n objects, where object j has a profit Pj. Each resource i has its own budget Ci (knapsack capacity), and Rij is the consumption of resource i by object j. The aim is to maximize the sum of the profits while working within the limited budgets.

## 5. CONCLUSION

The knapsack problem can be solved by the greedy technique, dynamic programming, and backtracking. One of the popular applications of the knapsack problem is resource allocation, which is used in various fields of engineering. Such problems can be modeled with either a single knapsack or multiple knapsacks. To improve the complexity, one can combine soft computing with the 0/1 knapsack problem, solved either by the greedy technique or by dynamic programming, to obtain optimal results.

## 6. REFERENCES

1. Crina Groşan, Mihai Oltean, D. Dumitrescu, "A New Evolutionary Algorithm for the Multiobjective 0/1 Knapsack Problem", ICTAMI 2003, Alba Iulia, p. 233.
2. D. Pisinger and P. Toth, "Knapsack Problems", in Ding-Zhu Du and P. M. Pardalos, editors, Handbook of Combinatorial Optimization, pp. 299–428, Kluwer Academic Publishers, 1998.
3. S. Martello and P. Toth, "Knapsack Problems: Algorithms and Computer Implementations", John Wiley and Sons, 1990.
4. H. Kellerer, U. Pferschy, and D. Pisinger, "Knapsack Problems", Springer, 2004.
5. S. Kameshwaran, "Algorithms for Piecewise Linear Knapsack Problems with Applications in Electronic Commerce", PhD Dissertation, Indian Institute of Science, Bangalore, 2004.
6. S. Cui, J. Xiao, A. J. Goldsmith, Z. Q. Luo, and H. V. Poor, "Estimation Diversity and Energy Efficiency in Distributed Sensing", IEEE Transactions on Signal Processing, vol. 55, 2007.
7. Stefka Fidanova, "Heuristics for Multiple Knapsack Problem", IADIS, 2005, ISBN 972-99353-6-X.
8. Sinha A., Zoltners A. A., "The Multiple-Choice Knapsack Problem", Operations Research, 27, pp. 503–515, 1979.
9. Diffie W., Hellman M. E., "New Directions in Cryptography", IEEE Trans. Inf. Theory, IT-22, pp. 644–654, 1976.
10. Martello S., Toth P., "A Mixture of Dynamic Programming and Branch-and-Bound for the Subset-Sum Problem", Management Science, 30, pp. 765–771, 1984.
11. Kochenberger G., McCarl G., Wyman F., "A Heuristic for General Integer Programming", Decision Sciences, 5, pp. 34–44, 1974.
12. G. Aggarwal and J. Hartline, "Knapsack Auctions", in SODA, pp. 1083–1092, 2006.
13. C. Borgs, J. Chayes, O. Etesami, N. Immorlica, K. Jain, and M. Mahdian, "Dynamics of Bid Optimization in Online Advertisement Auctions", in Proceedings of the 16th International WWW Conference, 2007.
14. P. Rusmevichientong and D. P. Williamson, "An Adaptive Algorithm for Selecting Profitable Keywords for Search-Based Advertising Services", in Proceedings of the 7th ACM Conference on Electronic Commerce, 2006.
15. J. Feldman, S. Muthukrishnan, M. Pal, and C. Stein, "Budget Optimization in Search-Based Advertising Auctions", in Proceedings of the 8th ACM Conference on Electronic Commerce, 2007.
16. S. Muthukrishnan, M. Pal, and Z. Svitkina, "Stochastic Models for Budget Optimization in Search-Based Advertising", manuscript, 2007.
17. D. Chakrabarty, Y. Zhou, and R. Lukose, "Budget Constrained Bidding in Keyword Auctions and Online Knapsack Problems", in WWW 2007, Workshop on Sponsored Search Auctions, 2007.