There is growing interest in improving the utilization of resources, such as memory and processor time, in large-scale enterprise applications. A system resource can be defined as a provider of a set of capabilities; such resources entail the network, hardware (memory) and software needed to execute any transaction initiated in or by the application. It has been noted that memory usage (as a resource) is perhaps the most important factor in system performance. In support of this, it has been argued that because processor time and memory resources have such a significant influence on the operation of any software application, it is important to understand how these applications utilize them. The continued expansion of software applications creates the need for more of such resources.
Resources should be distributed among the different application transactions in such a way that no transaction is starved. Monitoring resource utilization helps software engineers understand how applications make demands on resources and how those resources respond to each job or transaction. This is what resource management is about: monitoring the availability of system resources, allocating them, and provisioning them as demand arises. In this regard, the interest is in controlling how the capabilities provided by system resources and services are made available to other entities, whether users, applications, or services. Improper resource management leads to underutilization and wastage of resources, and the net result on the user experience is poor service delivery in these applications and data centers. It is desirable not only to avoid wasting resources through underutilization, but also to avoid lengthy response times resulting from over-utilization.
Various researchers have proposed many ways to provide resource management controls; we categorize them as: 1) resource management middleware techniques; 2) prediction-based resource allocation techniques; 3) web services; and 4) virtualization technology. Our study proposes a self-adaptive and prediction-based resource allocation approach, mainly because such techniques allow for resource allocation at the application level. The operational environment used in this study is a disease surveillance environment. It is important to note, however, that no single technique or method provides a complete solution for the transaction and resource management challenges in the dynamic operational environments in which these disease surveillance applications are deployed. Even so, this does not negate the fact that software developers are trying to analyze application operation environments using diverse analysis and design methods. These methods include different ways of monitoring the amount of resources utilized within a server and identifying a resource-strained server to avoid activating capacity upgrade on demand (CUoD).
2.1. Resource-Intensive Applications
Modern resource-intensive enterprise and scientific applications are creating a growing demand for high-performance computing infrastructures. These applications constantly interact with and rely heavily on complex resources, stretching the limits of an organization's existing resources. Such applications are characterized by complex processes that sometimes need to run for extended periods of time.
Resource-intensive applications can be either data-intensive or compute-intensive. Data-intensive applications handle massive amounts of data, commonly referred to as “big data”. Another category is user-intensive applications: web applications that must satisfy the interaction requirements of thousands, if not millions, of users, requirements which can hardly be fully understood at design time. Both data-intensive and user-intensive software applications could be considered compute-intensive, because they often require continuously increasing computing power and storage volume, in many cases on-demand for specific operations in the data lifecycle. This therefore calls for a capacity planning approach in most organizations, which, due to financial constraints, are not in a position to purchase additional computational resources each time the need arises.
2.2. Transaction Processing in Resource-Intensive Applications
Transaction processing is one of the essential features of such enterprise applications, since they perform computationally intensive work. For transaction processing to happen efficiently and effectively, considerable amounts of resources are required to complete execution in a reasonable timeframe. Each time a transaction is initiated it creates demand for a resource, and there are instances where two or more workloads place demands on the same resources at the same time. We deduce that for the extended time such a workload is running, it will be assigned the available resources and will consume them until its execution is complete, which could lead to other applications being denied or put on hold (waiting) since the resource is committed elsewhere. The resources must therefore be sliced and shared between machines running potentially heterogeneous workloads, consisting of many short jobs (transactions) and a small number of large jobs (transactions) that consume the bulk of the available resources, without incurring extra resource and performance costs. That is why it is important to determine how best to maximize the performance of the system even in such circumstances. Software applications must also have the ability to self-adapt to changes in the execution environment while still maintaining the expected quality of service.
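The sharing described above can be sketched as a simple proportional-share allocator: short jobs are given higher share weights so that a few long jobs cannot monopolize the resource, and any capacity a satisfied job does not need is redistributed to jobs that are still unsatisfied. This is a minimal illustration under our own assumptions (the `Job` structure and the weights are invented), not the allocation scheme of any cited system.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    demand: float  # resource units requested
    weight: int    # share weight (short jobs get a higher weight here)

def proportional_share(jobs, capacity):
    """Split a fixed resource capacity among jobs in proportion to their
    weights, capping each job at its actual demand and redistributing
    any leftover capacity to still-unsatisfied jobs."""
    alloc = {j.name: 0.0 for j in jobs}
    remaining = capacity
    active = list(jobs)
    while remaining > 1e-9 and active:
        total_w = sum(j.weight for j in active)
        leftover = 0.0
        still_active = []
        for j in active:
            share = remaining * j.weight / total_w
            grant = min(share, j.demand - alloc[j.name])
            alloc[j.name] += grant
            leftover += share - grant
            if alloc[j.name] < j.demand - 1e-9:
                still_active.append(j)
        remaining = leftover
        active = still_active
    return alloc

jobs = [Job("short-1", 10, 2), Job("short-2", 10, 2), Job("long-1", 100, 1)]
print(proportional_share(jobs, 60))
# short jobs are fully satisfied; the long job absorbs the remainder
```

Under this sketch, the two short jobs receive their full 10 units each and the long job receives the remaining 40, so a single large transaction never starves the many small ones.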
3. Design Concerns and Methods
3.1. Design Concerns for Resource-Intensive Applications
Insufficient attention to workloads and the interactions of the applications with the available (limited) resources at application design time can reduce the quality and effectiveness of the software, yet there is an increasing demand from users for software products of ever higher quality. We infer that for any application design, it is important to understand the management of the application's transactional processes and their implications within its operational environment. Accordingly, scheduling of transactions and management of their execution time are important performance requirements of any enterprise application.
Designing for such situations is challenging because resource management in a resource-limited/constrained environment is, in itself, a hard problem, due to the following: the large scale and complexity of such applications; the heterogeneity of resource types and their interdependencies; and the variability and unpredictability of the load. Failure to account, during planning, for the different complexities that can arise in the course of application development can result in huge bottlenecks. A lot of attention should therefore go into planning the architecture, and that is where the design comes in.
The underlying goal of any design process, or of the decisions to be incorporated in the final architecture, is ensuring that quality attributes are achieved, since these attributes are used as a bridge connecting business goals and software architectures. In planning the components and how they will relate, it is important to work out how the different categories of heterogeneous workloads will be processed, and that is where the method comes in. Available local resources capable of executing these applications need to be assigned to them, but in order to improve resource utility, resources must be properly allocated and load balancing must be guaranteed. It is also important to note that the resource demand of an application can change over time, and a software application designer needs to factor such changes into the design plan. The analysis drawn from this is that resource management in transaction processing and management is an important design decision and concern in the software engineering discipline.
3.2. Design Methods for Resource-Intensive Applications
Different techniques are used to target specific design and implementation issues in dynamic environments. It has been argued that there is always a trade-off in terms of performance, availability, consistency, concurrency, scalability and elasticity. Virtualization focuses heavily on the physical level of resource allocation for transactions, while web services, such as the Elastic Compute Cloud (EC2), involve the users by giving them a chance to monitor the allocation of resources and applications, as well as the custom metrics generated by a customer's applications and services, an aspect that will not be covered in this work. Others present adaptive resource management middleware techniques that achieve the QoS requirements of the system.
Such middleware performs QoS monitoring and failure detection, QoS diagnosis, and reallocation of resources to adapt the system to achieve acceptable levels of QoS. For an application to be adaptive, it needs to be self-aware, resource-aware and also context-aware. This means it should be able to understand its current environment of operation, the current circumstances, and the resources available. Prediction and forecasting should be important components of many real-world enterprise applications; such capabilities help reduce the risk of making wrong decisions while allocating resources to transactions. Our aim, therefore, is to work with a hybrid of adaptive and prediction-based resource management and allocation models to try and solve, even if only partially, the application resource management challenges currently faced in such environments. The resulting tabulation is an on-demand plan (a resource provisioning plan based on the needs of each transaction or workload) identified using the hybrid model.
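As a minimal sketch of the prediction side of such a hybrid, the resource demand of each transaction type could be estimated with an exponentially weighted moving average and then provisioned with some head-room as the adaptive correction. The class below, its `alpha` smoothing factor and its `headroom` multiplier are illustrative assumptions, not the model used in this study.

```python
class DemandPredictor:
    """EWMA-based predictor of per-transaction-type resource demand.

    alpha:    smoothing factor (weight given to the newest observation)
    headroom: multiplier applied at allocation time so a mispredicted
              spike does not immediately starve the transaction
    """
    def __init__(self, alpha=0.5, headroom=1.2):
        self.alpha = alpha
        self.headroom = headroom
        self.estimates = {}  # transaction type -> predicted demand

    def observe(self, tx_type, measured_demand):
        # Blend the new measurement with the previous estimate.
        prev = self.estimates.get(tx_type, measured_demand)
        self.estimates[tx_type] = (
            self.alpha * measured_demand + (1 - self.alpha) * prev
        )

    def allocate(self, tx_type):
        # Adaptive step: provision predicted demand plus head-room.
        if tx_type not in self.estimates:
            return None  # no history yet; fall back to a default plan
        return self.estimates[tx_type] * self.headroom

p = DemandPredictor()
for mb in [100, 120, 110]:          # observed memory demand, in MB
    p.observe("case_reporting", mb)
print(round(p.allocate("case_reporting"), 1))  # → 132.0
```

The EWMA keeps the predictor cheap to update per transaction, while the head-room term plays the role of the adaptive correction: over-provisioning slightly when history is volatile is the price paid for avoiding starvation.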
The on-demand plan, such as the one shown in Table 1, helps ensure that software engineers are fully aware of the types of jobs to be executed and the resource demand of each transaction during the design of the application and its modules.
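One possible shape for such an on-demand plan, assuming the four disease surveillance components discussed later as the transaction types, is a simple lookup table. The resource figures below are invented for illustration and do not reproduce Table 1.

```python
# Hypothetical on-demand plan: one row per transaction type, giving
# its priority and the resources to provision for it on demand.
on_demand_plan = [
    {"transaction": "case_detection",    "priority": 1, "cpu_share": 0.30, "memory_mb": 512},
    {"transaction": "data_recording",    "priority": 2, "cpu_share": 0.25, "memory_mb": 256},
    {"transaction": "data_compilation",  "priority": 3, "cpu_share": 0.25, "memory_mb": 1024},
    {"transaction": "data_transmission", "priority": 4, "cpu_share": 0.20, "memory_mb": 128},
]

def plan_for(tx_type):
    """Look up the provisioning entry for a transaction type."""
    for row in on_demand_plan:
        if row["transaction"] == tx_type:
            return row
    return None

print(plan_for("case_detection")["memory_mb"])  # → 512
```

Keeping the plan as explicit data, with the CPU shares summing to one, makes the resource demand of each transaction visible to the engineer at design time, which is the point of the plan.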
3.3. Workload Classification and Execution
A priority-based job scheduling algorithm helps us identify the level of priority of each job. In a workload, some jobs are executed before others depending on the nature of the job. Disease surveillance has the following components that are used to classify tasks: case detection, data recording, data compilation, and data transmission. Transactions from disease surveillance applications can fall within any of these components, and there must be a provision that all resources are made available to requesting users in an efficient manner to satisfy their needs. But the question is how? Of the components mentioned, which should be executed before the others?
For example, the graph in Figure 1 indicates that the most urgent task is given the shortest available execution time, while the least urgent task will be executed within the last 3.5 time block. It is therefore important to have criteria that can
Table 1. Sample on demand plan.
Figure 1. Sample job mapping based on urgency of output.
be used to initiate tasks and then execute them when the criteria are met, a process known as task scheduling, a vital part that assigns tasks to suitable resources for execution. One proposed set of classification criteria entails the following: performance goals, resource requirements and business importance.
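A minimal sketch of such priority-based scheduling over the four disease surveillance components can use a priority queue; the urgency ordering below (case detection first) is our own illustrative assumption, not one prescribed by the cited criteria.

```python
import heapq

# Hypothetical urgency ordering of the four components; lower number
# means more urgent. The ordering is an assumption for illustration.
PRIORITY = {
    "case_detection": 1,      # most urgent: may flag an outbreak
    "data_transmission": 2,
    "data_recording": 3,
    "data_compilation": 4,    # least urgent: periodic batch work
}

def schedule(tasks):
    """Return tasks in execution order: lower priority number first,
    ties broken by arrival order (the enumeration index)."""
    heap = [(PRIORITY[kind], seq, kind) for seq, kind in enumerate(tasks)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, kind = heapq.heappop(heap)
        order.append(kind)
    return order

print(schedule(["data_compilation", "case_detection", "data_recording"]))
# → ['case_detection', 'data_recording', 'data_compilation']
```

The arrival-order tiebreaker keeps the schedule stable among equally urgent tasks, so no task of a given class is starved by later arrivals of the same class.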
This approach is important in task classification where the concern is adding value to a business process, as shown in Table 2. For example, in a disease surveillance environment, the value based on the output would be an improvement in the monitoring of patients with high-risk conditions. However, it leaves out the element of understanding the nature of the task or workload and how that will contribute to the overall performance of an application; yet this study infers that if an application is “heavy”, dealing with large quantities of data, it will consume more resources and will in turn affect the performance of the application.
Table 2. Sample task classification.
Another proposed approach considers the nature and characteristics of the job, using the following three characteristics: Volume, which refers to the quantity of data generated and determines size only; Variety, which specifies the category to which the data belong (for example, in a disease surveillance environment the categories are case identification, case reporting, etc.); and Velocity, which specifies the speed at which the data is generated (for example, social media data comes in fast, while hospital data comes in on either a weekly or monthly basis). The challenge with using this approach is that it is fully focused on the job, leaving out other important considerations such as the available resources. A further approach classifies workloads based on task resource consumption needs and patterns, such as the time needed to execute the job (short or long duration) and CPU and memory demand. The challenge with this approach, however, is that it does not perform intra-cluster analysis to derive a detailed workload model, and it also neglects user patterns, which are as important as the tasks in the overall workload model.
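The resource-consumption-based classification described above can be sketched as a simple rule set over a job's duration, CPU and memory demand. The thresholds below are illustrative assumptions only, not values taken from any cited workload study.

```python
# Hypothetical thresholds: jobs over 5 minutes are "long", over 70%
# CPU are "cpu-heavy", over 1 GiB resident memory are "memory-heavy".
def classify(job):
    """job: dict with 'duration_s', 'cpu_pct', 'memory_mb' keys.
    Returns the list of labels describing its consumption pattern."""
    labels = ["long" if job["duration_s"] > 300 else "short"]
    if job["cpu_pct"] > 70:
        labels.append("cpu-heavy")
    if job["memory_mb"] > 1024:
        labels.append("memory-heavy")
    return labels

print(classify({"duration_s": 20,  "cpu_pct": 85, "memory_mb": 256}))
# → ['short', 'cpu-heavy']
print(classify({"duration_s": 900, "cpu_pct": 30, "memory_mb": 4096}))
# → ['long', 'memory-heavy']
```

Such labels feed naturally into the on-demand plan: a “long, memory-heavy” compilation job and a “short, cpu-heavy” detection job can then be provisioned and scheduled differently.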
Achieving efficient resource allocation is one of the most challenging problems, and the question is how to choose a method that will give efficient resource allocation and management. This study has proposed a self-adaptive, resource-aware approach that will give software engineers a better view of application capacity planning during the design of applications. It is important that software engineers know how applications make demands on resources and how those resources respond to each job or transaction, considering the fact that computational resources are scarce. A priority-based scheduling approach will help ensure that jobs considered most urgent, especially where disease surveillance is concerned, are given the priority that they deserve in terms of execution, because each job will be classified based on its nature and business value. This does not mean, however, that all resources will be committed to jobs considered “most important or urgent”; rather, the application will serve tasks based on how many resources they need, an aspect that is very much needed.