Received 6 May 2016; accepted 18 May 2016; published 19 August 2016
Cloud computing has grown steadily in popularity. Driven by competition, data center providers have launched many new user applications and expanded into a wide range of services. Such applications are hosted on the data center's physical machines or virtual machines. Virtualization is the method of running multiple independent virtual operating systems on a single physical computer, an approach that maximizes the return on investment in the hardware. The creation and management of virtual machines is often called platform virtualization. Software applications can be shared using virtual machines with minimal sharing of the underlying resources; this kind of hardware isolation leads to optimal resource utilization.
The cloud can offer customers the flexibility to specify the exact amount of applications, computing power, or data they need to satisfy their business requirements. A major driver of cloud adoption is the availability of collaborative services. Perhaps because of the great flexibility of open source, which facilitates the efforts of vendors, large commercial users, and above all cloud implementers, most of the successful open-source applications have evolved from within consortia.
1.1. Server Consolidation
Server consolidation, described above, is an approach to effective usage of cloud server resources. It reduces the total number of servers and server locations, thereby addressing the problem of server sprawl: a situation in which underutilized servers occupy more space and consume more resources than the workload justifies. Figure 1 illustrates the server consolidation process.
1.2. Residual Resource Fragmentation
In cloud computing, resource allocation (RA) is the process of assigning the available resources to the cloud applications that need them over the Internet. If allocation is not managed properly, services degrade. Resource provisioning addresses that problem by allowing cloud service providers to manage the resources for each individual phase. A resource allocation strategy integrates cloud provider activities for allocating resources within the cloud environment. Optimal resource allocation strategies should avoid the following patterns:
Resource fragmentation: arises when the resources are segregated into fragments that cannot be used together.
Resource contention: arises when two applications try to use the same resource at the same time.
Scarcity of resources: arises when the available resources are limited.
Under-provisioning: occurs when an application is allotted fewer resources than it demands.
Over-provisioning: occurs when an application is allotted more resources than it demands.
Resource allocation strategies can be categorized as follows  :
Execution policy: matchmaking; VM load, cost, speed, type.
Security policy: processor.
Utility: response time, profit, application satisfaction.
Gossip: peer information, peer resources, expert knowledge.
Auction: market bid.
Hardware resource dependency: storage, CPU, I/O, communication.
SLA: response time, throughput, QoS.
Application: large scale, data intensive, real time, shared DB.
The resource allocation should satisfy the following parameters: a) throughput, b) latency, c) response time. Even when cloud service providers are reliable, dynamic resource allocation poses a crucial problem. To overcome it, the input parameters shown in Table 1 are needed.
Figure 1. Server consolidation.
The contributions of the paper are as follows:
1. First, the active physical servers are found optimally with the help of the cuckoo search algorithm.
2. Then the residual resource defragmentation process is carried out using the equations defined in the solution methodology.
3. Finally, the resources are allocated with the help of the Enhanced Cloud Resource Consolidation (ECRC) algorithm.
1.3. Organization of the Paper
The remainder of this paper is organized as follows: Section 1.4 reviews the related work. Section 2 describes the problem of residual resource fragmentation and its consequences. Section 3 describes the solution methodology. Section 4 presents the experimental results and their analysis. The final section gives the conclusion and the future scope of the work.
1.4. Related Work
In this section we discuss prior work on the problem of reducing the number of active PMs. Many earlier authors considering the server consolidation problem have projected the consequences of their consolidation process  . In the Sercon algorithm  , the metrics identified are CPU and memory load capacity. In the Entropy  algorithm, the number of migrations is the metric, and the resources considered are CPU and memory. In the Miyako  algorithm, the metric used is the dirty page bit, and the resource considered is memory. In the Sandpiper  algorithm, hotspot mitigation is set as the goal, and the resources identified are CPU, memory, and the network. In Memory Buddies  , the resource considered is memory. In Khanna's algorithm  , server consolidation is the goal, and residual capacity is the metric. This comparison is summarized in Table 2.
This problem is technically identified as server sprawl. In the state of the art for server consolidation, different approximation approaches have been proposed using various techniques. Many heuristic techniques, such as First Fit and Best Fit Decreasing  , the Harmonic and Cardinality-Constrained Harmonic approach  , Improvised First Fit Decreasing  , Modified Best Fit Decreasing  , and Sercon  , are applications of the most widely used bin packing algorithms. Other approaches, such as the Dynamic Round Robin approach  , the Adaptive Threshold based approach  , the Control Theoretic solution  , pMapper  , the 2-Phase Optimization method  , and the Genetic Algorithm based approach  , afford different methods and techniques for reducing energy consumption and power, including LP formulations.
Table 1. Parameter inputs in resource allocation.
Table 2. Algorithms for server consolidation.
2. Problem Statement
Existing systems give little attention to residual resource fragmentation. Some papers concentrate on reducing the number of physical servers, which lowers power consumption; however, when the residual resources are also consolidated and reused, a much greater cost reduction is possible. Residual resource fragmentation refers to the state of a data center where a sufficient amount of residual resources is available for a new VM allocation or VM reallocation, but the resources are fragmented and distributed across multiple active PMs, rendering them unusable. The process of reducing residual resource fragmentation is called residual resource defragmentation. By improving resource utilization alone, it is not possible to improve the utilization of CPU, memory, and network bandwidth simultaneously using a single scalar score; if a scalar score is used to decide on the destination PM, it leads to fragmentation of residual resources. Resource fragmentation renders the residual resources useless or less useful, thereby adding to the cost incurred by the data center provider. Reducing residual resource fragmentation in each consolidation interval also reduces the migrations that may be required for dynamic resource consolidation. We therefore propose a dynamic resource consolidation and allocation technique with an optimal residual resource defragmentation process.
3. Solution Methodology
The proposed method comprises three phases.
a) Initially, the active physical servers are selected by an optimization algorithm,
i.e., the optimal servers are selected by employing the binary cuckoo search algorithm, with cost as the fitness used to select them. Here the active physical machines are identified using the binary cuckoo search algorithm, which is the binary version of cuckoo search. The binary version was proposed by Nozarian, while the CS algorithm itself was developed by Xin-She Yang in 2009. For simplicity in describing cuckoo search, three rules are followed: 1) each cuckoo lays one egg at a time and dumps its egg in a randomly chosen nest; 2) the best nests with high-quality eggs carry over to the next generations; 3) the number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability pa ∈ [0, 1].
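The three rules above can be sketched in code. The following is a minimal illustration, not the paper's implementation: each nest is a bit vector marking which physical servers stay active, the sigmoid binarization and the toy cost function are assumptions, and the Levy flight is simplified to a Gaussian step.

```python
import random
import math

def binary_cuckoo_search(cost, n_servers, n_nests=15, pa=0.25, iters=100):
    """Minimal binary cuckoo search sketch: each nest is a 0/1 vector of
    active servers; lower cost() is better. Rules: one candidate egg per
    nest per iteration, best nests survive, a fraction pa is abandoned."""
    # Random initial population of candidate server selections.
    nests = [[random.randint(0, 1) for _ in range(n_servers)]
             for _ in range(n_nests)]
    best = min(nests, key=cost)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # Simplified Levy-flight step, binarized bit-by-bit via a sigmoid.
            step = random.gauss(0, 1)
            cand = [1 if random.random() < 1 / (1 + math.exp(-(b + step)))
                    else 0 for b in nest]
            if cost(cand) < cost(nest):  # keep the better egg
                nests[i] = cand
        # A fraction pa of the worst nests is discovered and rebuilt at random.
        nests.sort(key=cost)
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = [random.randint(0, 1) for _ in range(n_servers)]
        best = min(nests + [best], key=cost)
    return best

# Toy fitness (an assumption): minimize active servers, but penalize
# selections that keep fewer than 3 servers active.
toy_cost = lambda s: sum(s) + 100 * max(0, 3 - sum(s))
solution = binary_cuckoo_search(toy_cost, n_servers=8)
```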
Let rc(i), rm(i) and rn(i) be the CPU requirement, memory requirement, and network bandwidth requirement of VM v(i), where i = 1, 2, 3, …, n. Let Rc, Rm, Rn be the thresholds for CPU utilization, memory utilization, and network bandwidth utilization respectively in the PMs. Let the current state of the PMs be represented by N(j), the set of VMs present in the jth PM, where j = 1, 2, …, m. The problem definition is as follows: find N(j), the set of VMs consolidated into PM p(j), such that after performing server consolidation the following Equations (1)-(3) are satisfied.
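Equations (1)-(3) are not reproduced in this excerpt. From the definitions above, a plausible form of the per-PM capacity constraints (an assumption, not the paper's exact statement) would be:

```latex
\sum_{i \in N(j)} r_c(i) \le R_c, \qquad
\sum_{i \in N(j)} r_m(i) \le R_m, \qquad
\sum_{i \in N(j)} r_n(i) \le R_n, \qquad
\forall j \in \{1, \dots, m\}
```

i.e., the aggregate CPU, memory, and bandwidth demand of the VMs placed on each PM must stay within that PM's utilization thresholds.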
b) After that, the maximum utilization of resources on these optimal servers is calculated.
Here the resources are categorized based on type and allocated to suitable physical servers or virtual machines, which reduces resource fragmentation in the data centers. The maximum utilization of the resources is identified, the VMs capable of migrating are found, and the migration process takes place. VM migration between PMs is carried out only if the following conditions are satisfied:
The PM from which the set of VMs is chosen should have all three resource utilizations less than or equal to the corresponding resource utilizations in the candidate PM.
The exchange of VMs between the PMs should improve the residual resource defragmentation in the two PMs.
The conditions are given in the following Table 3.
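The migration conditions above can be expressed as a small feasibility check. This is an illustrative sketch only; the dictionary keys and the second condition's reliance on a defragmentation score are assumptions based on the text.

```python
def can_migrate(src_util, dst_util, defrag_before, defrag_after):
    """Sketch of the two migration conditions from the text:
    1) every resource utilization on the source PM is <= the
       corresponding utilization on the candidate (destination) PM;
    2) the exchange improves the residual resource defragmentation
       score of the two PMs (higher score = less fragmentation)."""
    dominated = all(src_util[r] <= dst_util[r] for r in ("cpu", "mem", "bw"))
    improves = defrag_after > defrag_before
    return dominated and improves

# Hypothetical utilizations as fractions of each PM's threshold.
src = {"cpu": 0.2, "mem": 0.3, "bw": 0.1}
dst = {"cpu": 0.5, "mem": 0.6, "bw": 0.4}
```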
The central processing unit, memory, and network bandwidth play the main roles here, and the six conditions above cover all the satisfactory conditions for resource utility. The defragmentation process is carried out using the following Equation (4),
where M: the number of active PMs,
Tji: the threshold for the jth resource,
Cij: the current utilization of the jth resource in the ith PM.
The higher the value of defragmt, the lesser the resource fragmentation; thus the defragmt value is what matters for residual resource defragmentation. The final defragmentation value is calculated by combining the per-resource values, i.e., total defragmt T = defragmt CPU + defragmt MEM + defragmt BW.
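The combination of per-resource scores can be sketched as follows. Since Equation (4) is not reproduced in this excerpt, the per-resource score used here is a stand-in (the largest single-PM residual slack divided by the total residual slack, so concentrated leftover capacity scores higher); only the summation defragmt_CPU + defragmt_MEM + defragmt_BW comes from the text.

```python
def defragmt(thresholds, utilizations):
    """Hypothetical per-resource defragmentation score (a stand-in for
    Equation (4)): the largest residual slack on any single active PM
    divided by the total residual slack. Higher = less fragmented,
    matching the text's interpretation of the score."""
    residuals = [t - c for t, c in zip(thresholds, utilizations)]
    total = sum(residuals)
    return max(residuals) / total if total > 0 else 1.0

def total_defragmt(pm_thresholds, pm_utilizations):
    """Total score: defragmt_CPU + defragmt_MEM + defragmt_BW, as in
    the text. Each PM entry is [cpu, mem, bw]."""
    return sum(
        defragmt([pm[j] for pm in pm_thresholds],
                 [pm[j] for pm in pm_utilizations])
        for j in range(3)  # j = 0: CPU, 1: memory, 2: bandwidth
    )

# Two active PMs, identical thresholds.
th = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
fragmented = [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]   # slack spread evenly
consolidated = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]] # slack on one PM
```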
Thus in the second phase the residual resources are defragmented. The thickness is calculated here on the basis of the cost and memory. In this phase, both server consolidation and the defragmentation process take place, and the task is then ready for the allocation process.
c) Finally, the resources are allocated and scheduled using Enhanced cloud resource consolidating (ECRC).
The maximum utilization of resources on these optimal servers is calculated in the second phase. The resources are then allocated and scheduled using enhanced cloud resource consolidating (ECRC), which enhances the traditional cloud resource consolidation algorithm. In  , the author explained the ant colony algorithm as a path finding technique. Levy's flight  was explained as a random-walk method, which is useful for efficient routing in the network.
In the ECRC algorithm, the defragmentation score is the fitness used to select the resource corresponding to the user task. The server consolidation process should carry out residual resource defragmentation so that the leftover resources remain usable to the maximum extent, so we select the resource based on the user's requirement. If the available resource in the cloud is high compared to the user requirement, a score value is fixed, and based on that score the fitness for the firefly algorithm is set. Based on the optimal value, we defragment the resources for the user requirement. The ECRC algorithm workflow is given in Figure 2.
Figure 2. Workflow for the ECRC algorithm.
Table 3. Conditions for PM migrations.
Using the ECRC algorithm, the better nodes are found; from these, the top node of the table is chosen according to the user's task requirement. If two or more nodes are found, the thickness is calculated with the help of the cost. Thus, with this technique, the server sprawl problem can be mitigated. The ECRC algorithm can also provision resources in multiple provisioning stages as well as in a long-term plan.
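The selection step described above (rank candidate nodes, keep the top matches, break ties by cost) can be sketched as follows. This is a hypothetical illustration of the described workflow, not the paper's ECRC implementation: the node fields and the tightness-of-fit fitness are assumptions.

```python
def select_nodes(nodes, task_cpu, task_mem, task_bw, top_k=5):
    """Sketch of the ECRC selection step: keep nodes that can host the
    task, rank them so that tighter fits (less leftover residual
    capacity, hence less fragmentation) come first, break ties by
    cost, and return the top-k candidates for the user."""
    feasible = [n for n in nodes
                if n["cpu"] >= task_cpu and n["mem"] >= task_mem
                and n["bw"] >= task_bw]

    def fitness(n):
        # Smaller total leftover slack after placement = tighter fit;
        # cost is the secondary (tie-breaking) criterion from the text.
        leftover = ((n["cpu"] - task_cpu) + (n["mem"] - task_mem)
                    + (n["bw"] - task_bw))
        return (leftover, n["cost"])

    return sorted(feasible, key=fitness)[:top_k]

# Hypothetical candidate nodes with residual capacities and costs.
nodes = [
    {"name": "a", "cpu": 4, "mem": 8, "bw": 1, "cost": 5},
    {"name": "b", "cpu": 2, "mem": 4, "bw": 1, "cost": 3},
    {"name": "c", "cpu": 1, "mem": 2, "bw": 1, "cost": 1},
]
ranked = select_nodes(nodes, task_cpu=2, task_mem=4, task_bw=1)
```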
The implementation is done in CloudSim along with Java, and the performance of the proposed system is compared with the existing systems.
4. Experimental Results
4.1. Experimental Study 1 (Data Defragmentation)
Figure 3 shows the residual resource defragmentation for server consolidation, comparing the existing consolidation algorithms OVMP and OCRP with our ECRC algorithm. The datasets for the residual resource defragmentation are listed in Table 4. In each consolidation interval, the residual resource defragmentation is higher for ECRC than for OVMP and OCRP; since a higher defragmentation value means less resource fragmentation, we conclude that the proposed ECRC algorithm gives the better result.
Table 4. The sample datasets for the defragmentation.
4.2. Experimental Study 2 (Energy Consumption)
Figure 4 illustrates the cost reduction, and Table 5 gives the dataset for the data center energy consumption under the optimal virtual machine placement (OVMP), optimized cloud resource provisioning (OCRP), and enhanced cloud resource consolidating (ECRC) approaches for different workloads. From Figure 4 we can infer that the energy consumption of the three approaches is almost the same; the proposed ECRC approach thus reduces overall costs relative to OVMP and OCRP without increasing the energy consumption of the data center.
Figure 3. Residual resource defragmentation at different consolidation intervals.
Figure 4. Data center energy consumption for different consolidation algorithms.
Table 5. Sample data sets for the data center energy consumption.
In cloud computing, server sprawl is one of the rarely identified problems, and server consolidation plays a vital role in addressing it.
Binary cuckoo search is used to identify the active servers, as well as for grouping the tasks and for searching. The enhanced cloud resource consolidating (ECRC) algorithm identifies the exact match for a particular task: it finds the top five matching nodes for the task, and from these the best fit is selected and listed to the user. Defragmentation assigns the thickness of the values, and the parameter used for identifying the best node is the cost. Compared with existing server allocation techniques, our ECRC technique gives better cost fitness, and its results differ from those of the existing residual resource allocation techniques. These conclusions are based on the survey conducted over various authors' works; of the various parameters available for evaluating the output, memory allocation and cost are used here.