Sunday 29 December 2013

Variables, Scope and Parameter passing


This post explains what exactly pass by reference and pass by value mean in the context of programming, and how stacks and heaps relate to them.

It is a known fact that both C and Java pass by value.
Let us consider the following segment of code:

#include <stdio.h>

void swap ( int *x , int *y );

int main () {
    int a = 5;
    int b = 4;
    swap (&a, &b);
    printf ( "%d : ", a );
    printf ( "%d : ", b );
    return 0;
}

void swap ( int *x , int *y ) {
    int temp = *x;
    *x = *y;
    *y = temp;
}
Listing 1

The above code swaps the values of a and b so that a contains 4 and b contains 5.
The function call swap (&a, &b); passes the addresses of the variables a and b, and this is what facilitates the actual swap of the values contained in the two variables. Note that the addresses &a and &b are themselves passed as values, and this is precisely why we say function calls in C pass by value.

Local variables or Stack variables

Now, if we take a look at the variable temp in Listing 1, we can see that it is a local variable. Variables which are local in scope are allocated on the stack; when a function returns (that is, when its last instruction has been executed), the local variables on the stack corresponding to that function are invalidated, and the memory thus reclaimed is open for reallocation.

In Listing 1, the variables a and b defined in the main function are local in scope to main and thus reside on the stack. These variables remain stack-resident until main returns (that is, until return 0 has executed). Likewise, temp is a local variable defined in the swap function, and its scope is confined to swap alone. After the final instruction in swap has been executed, temp is invalidated and the memory thus reclaimed may be re-allocated.

It is important to understand how this behaviour relates to pass by value.

Consider Listing 2.
#include <stdio.h>

int sum ( int a , int b ) {
    int s = a + b;
    return s;
}

int main () {
    int a = 4;
    int b = 5;
    int s = sum ( a, b );
    printf ( "sum: %d\n", s );
    return 0;
}
Listing 2

Now, we can see that the values contained in a and b are passed to the function sum. It is important to understand here that in C, copies of the values contained in a and b are created and passed as the function arguments. Java behaves the same way for primitives such as int: the values themselves are copied. For objects, Java copies the reference and passes that copy as a value. While this may be suggestive of pass by reference, the reference is itself treated as a value, and it is because of this that we say both C and Java pass by value.

Now, when control returns from the function sum, the variable s inside sum goes out of scope and its memory is reclaimed; its value has already been copied into the caller's s. After printf prints sum: 9, the variables a, b and s in main go out of scope too when main returns. Thus, we can see that stack-resident variables have a scope that is confined to the block where they have been defined.
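To see the Java side of this concretely, here is a minimal, illustrative sketch (class and method names are my own): reassigning a parameter inside a method never affects the caller's variable, because the method only ever receives a copy, whereas mutating an object through a copied reference is visible to the caller.

public class PassByValueDemo {
    static void reassign(int n, StringBuilder sb) {
        n = 42;                          // changes only the local copy of the value
        sb.append(" world");             // mutates the object the copied reference points to
        sb = new StringBuilder("other"); // rebinds only the local copy: invisible to the caller
    }

    public static void main(String[] args) {
        int a = 5;
        StringBuilder s = new StringBuilder("hello");
        reassign(a, s);
        System.out.println(a); // 5 -- the caller's int is untouched
        System.out.println(s); // "hello world" -- mutation through the copied reference shows
    }
}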

Variables allocated on the Heap

In C, variables declared outside any function body have static storage duration: they live in the program's data segment (not on the stack), have file-level scope and may be accessed by all functions contained in that file. Memory allocated using malloc, on the other hand, lives on the heap and remains allocated until you explicitly release it using the free function. In Java, objects allocated using the 'new' keyword are allocated on the heap and are automatically garbage-collected by the JVM when they are no longer reachable.

Stack and Heap distribution and impact on variable scope

Each thread in a program has its own stack space while all the threads share the same heap space.

Consider the following block of code in Listing 3:
public String foo ( String city1 , String city2 ) {
    String city3 = city1 + "," + city2;
    save (city3);
    return city3;
}
Listing 3

As may be apparent, each thread invokes the method foo with its own values for 'city1' and 'city2'; the method stores the concatenated result in a database using the save function, and a reference to the result is also returned. Now, it is important to understand a few things here:

·       city3 is an object of String type, and the statement String city3 = city1 + "," + city2; initializes it.

·       Upon initialization, the object that city3 refers to resides on the heap, while the reference to it (the name city3) resides on the stack corresponding to the calling thread.

·       The heap, as mentioned above, is shared by all threads, but each thread typically allocates from its own local portion of the heap, called an arena. This makes allocation fast; note, though, that it does not make the allocated object private to the thread.

·       The stack space corresponding to a thread may in itself be divided into multiple frames, each frame corresponding to a different function or method and bearing the variables that are local in scope to that method.

Bullet 3 tells us that the object referred to by 'city3' in Thread1 is allocated from the heap arena corresponding to Thread1; the allocation is thread-local, but the object itself remains reachable by any thread that holds a reference to it. From bullet 4, we can see that the reference itself, that is, the name 'city3', is local in scope to the function foo and thereby resident on the stack frame corresponding to foo in the stack space of Thread1.

As foo returns the value of the reference (the address of the referred object), the local copy of the reference named 'city3' is itself invalidated, whereas the object remains in existence until it is reclaimed by the JVM's garbage collector. This is to re-iterate that Java also exchanges data between methods by passing values, which may be primitive values or the values of references.

Key things to remember:

1. Threads have their own stack space

2. Heap space is shared between threads, but each thread typically allocates from its own portion of the heap space. Objects initialized by a thread (using the 'new' keyword in Java, the malloc function in C, or any other method of object initialization such as the String concatenation shown above) remain resident there until they are freed, either explicitly by the user (e.g. using the free function for malloc'd memory) or automatically (e.g. by JVM garbage collection).

3. Each method invocation on a thread's stack gets its own stack frame, which contains the variables local to that method.

4. C and Java pass by value.
Two separate C programs are given below that help us understand the above-mentioned ideas. The first is unreliable in that it returns the addresses of local variables resident on a function's stack frame and then uses those addresses after the frame is gone; the second works reliably by allocating the values to be passed back on the heap using malloc.

UNRELIABLE CODE:
#include <stdio.h>

long *power ( int *a );
int  *doubler ( int *a );

int main ( int argc , char *argv[] ) {
    if ( argc < 2 ) {
        printf ( "Usage: %s <digit>\n", argv[0] );
        return 1;
    }
    printf ( "Command line argument: %s\n", argv[1] );
    int operand = *argv[1] - '0';
    printf ( "Entered cmdline operand: %d\n", operand );

    long *result1;
    int  *result2;

    result1 = power (&operand);
    printf ( "Power of %d: %ld\n", operand, *result1 );  /* undefined behaviour! */

    result2 = doubler (&operand);
    printf ( "Double of %d: %d\n", operand, *result2 );  /* undefined behaviour! */
    return 0;
}

long *power ( int *a ) {
    printf ( "*a : %d\n", *a );
    long result1 = (long) (*a) * (*a);
    printf ( "result1 : %ld\n", result1 );
    return &result1;  /* BUG: address of a stack variable that dies
                         the moment power returns */
}

int *doubler ( int *a ) {
    printf ( "*a : %d\n", *a );
    int result2 = (*a) * 2;
    printf ( "result2 : %d\n", result2 );
    return &result2;  /* BUG: same dangling-pointer problem */
}

RELIABLE CODE:

#include <stdio.h>
#include <stdlib.h>

long *power ( int *a );
int  *doubler ( int *a );

int main ( int argc , char *argv[] ) {
    if ( argc < 2 ) {
        printf ( "Usage: %s <digit>\n", argv[0] );
        return 1;
    }
    printf ( "Command line argument: %c\n", *argv[1] );
    int operand = *argv[1] - '0';
    printf ( "Entered cmdline operand: %d\n", operand );

    long *result1;
    int  *result2;

    result1 = power (&operand);
    printf ( "Power of %d: %ld\n", operand, *result1 );

    result2 = doubler (&operand);
    printf ( "Double of %d: %d\n", operand, *result2 );

    free (result1);  /* heap allocations must be released explicitly */
    free (result2);
    return 0;
}

long *power ( int *a ) {
    printf ( "*a : %d\n", *a );
    long *result1 = malloc ( sizeof (long) );  /* heap-resident: outlives this frame */
    *result1 = (long) (*a) * (*a);
    printf ( "*result1 : %ld\n", *result1 );
    return result1;
}

int *doubler ( int *a ) {
    printf ( "*a : %d\n", *a );
    int *result2 = malloc ( sizeof (int) );
    *result2 = (*a) * 2;
    printf ( "*result2 : %d\n", *result2 );
    return result2;
}

Wednesday 4 December 2013

Intrinsic, Explicit and Client-Side Locking

Over the last few days, I have been reading about the different locking mechanisms in Java, and what interests me most about them is how near-identical yet different they are. Their differences are so subtle and delicate that it is not unusual to mistake one for the other.

Intrinsic locking is based on locking on the calling object, or "this". When we simply append the "synchronized" keyword to an instance method's signature, what we are in fact writing is a synchronized (this) block around the method body (a static synchronized method locks on the Class object instead).
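A quick illustrative sketch of that equivalence (the class and method names are my own):

public class Account {
    private int balance;

    // These two methods acquire exactly the same intrinsic lock.
    public synchronized void deposit(int amount) {
        balance += amount;
    }

    public void depositExplicitly(int amount) {
        synchronized (this) {
            balance += amount;
        }
    }
}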

Explicit locking relies on the lock used by an underlying private data member of the class. This encompasses delegating thread safety to the class members, for instance by using thread-safe members such as ConcurrentHashMap or the synchronized collections such as Vector.
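A hedged sketch of such delegation (VisitCounter and its methods are my own illustrative names): the class adds no locking of its own and leaves thread safety entirely to its thread-safe member.

import java.util.concurrent.ConcurrentHashMap;

public class VisitCounter {
    // Thread safety is delegated to the ConcurrentHashMap member.
    private final ConcurrentHashMap<String, Integer> visits =
            new ConcurrentHashMap<String, Integer>();

    public void recordVisit(String page) {
        visits.putIfAbsent(page, 0);
        // A plain get-then-put would reintroduce a race; the atomic
        // replace(key, oldValue, newValue) keeps the increment safe.
        Integer old;
        do {
            old = visits.get(page);
        } while (!visits.replace(page, old, old + 1));
    }

    public int getVisits(String page) {
        Integer v = visits.get(page);
        return (v == null) ? 0 : v;
    }
}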

Client-side locking relies on the lock employed by the class member itself. While, at first glance, this may seem identical to explicit locking, it is a little different in essence. Explicit locks cover the routine operations already provided by the member, such as list.add() or list.removeAll(). Client-side locks are wrappers over these collection objects that synchronize on the very lock the object uses internally. Such a design is called for when you wish to compose several operations, possibly across more than one data member, into a single atomic transaction: the wrapper acquires the lock of each concerned member and encapsulates the steps into one atomic unit.
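A small sketch makes this concrete (ListHelper and putIfAbsent are illustrative names; the pattern is the classic one from Java Concurrency in Practice). The wrapper synchronizes on the list object itself, because that is the lock its synchronized wrapper uses internally:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ListHelper<E> {
    public final List<E> list =
            Collections.synchronizedList(new ArrayList<E>());

    // Client-side locking: lock on the list itself so the
    // check-then-act sequence below is atomic with respect to
    // every other operation on the list.
    public boolean putIfAbsent(E x) {
        synchronized (list) {
            boolean absent = !list.contains(x);
            if (absent) {
                list.add(x);
            }
            return absent;
        }
    }
}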

Friday 22 November 2013

Java Tip: Do not attempt to call a method in a superclass once it has been overridden

Well, once a method has been overridden in a subclass, any call to it on a subclass instance, including calls made from inside the superclass's own methods, is dispatched to the overriding version; you cannot reach the superclass version except via super from within the subclass. Little things like these are the basics of Java and OOP, but it is not very unusual to forget them and see some pretty amusing results. I will illustrate the same with an example:

Consider the following classes Mock, MockSubClass and MockerMain.

As the name goes, MockSubClass inherits from the parent class Mock and MockerMain is the class bearing my main method.

The class definitions have been given below:
package mocker;

import java.util.Collection;
import java.util.Set;

MOCK
public class Mock {

    Set s;

    protected Mock (Set newSet) {
        s = newSet;
    }

    protected void addMethod (Object o) {
        s.add(o);
    }

    protected void addMethodAll (Collection c) {
        System.out.println("Inside method addMethodAll\n");
        for ( Object o : c ) {
            addMethod (o);  // dispatched dynamically: a subclass override, if any, runs here
        }
    }

    protected Collection getCollection () {
        return s;
    }
} //end class Mock

MOCKSUBCLASS

package mocker;

import java.util.Collection;
import java.util.Set;


public class MockSubClass extends Mock {

    int addCount;  // counts the elements added so far

    public MockSubClass (Set c) {
        super (c);
        addCount = 0;
    }

    public void addMethod (Object o) {
        System.out.println("Inside MockSubClass.addMethod\n");
        addCount++;
        super.addMethod(o);
    }

    public void addMethodAll (Collection c) {
        addCount = addCount + c.size();
        super.addMethodAll(c);  // Mock.addMethodAll calls addMethod once per
                                // element, re-entering the override above
    }

    public int getAddCount () {
        return addCount;
    }

    public Collection getCollection () {
        return super.getCollection();
    }
} //end class MockSubClass

MOCKERMAIN

package mocker;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class MockerMain {

    public static void main(String[] args) {
        int count = 0;
        Set<String> s = new HashSet<String>();
        MockSubClass obj = new MockSubClass (s);
        obj.addMethod("LV");
        count = obj.getAddCount();
        System.out.printf("1.)" + "count = " + count + "\n");
        obj.addMethodAll(Arrays.asList("LA", "Sacramento", "SF"));
        count = obj.getAddCount();
        System.out.printf("2.)" + "count = " + count + "\n");
        System.out.printf("3.)" + "List = " + obj.getCollection() + "\n");
    }
} //end class MockerMain

What I am looking to do here is create a collection, add some strings to it, and use the method getAddCount() in MockSubClass to retrieve the number of elements added so far. I insert one element first, and addCount is updated to 1 accordingly.

Now I add three more elements, and 3 is added to the current value of addCount, which is 1. At least, that's what the code appears to be doing. But here is where we need to look closely: addMethodAll in MockSubClass adds 3 up front and then makes a call to addMethodAll in the superclass, Mock. On careful examination of addMethodAll in Mock, we can see that it in turn calls addMethod once for each element of the collection being added. The method addMethod has been overridden by the subclass, so each of those calls is serviced by the version in the subclass and not by the one in the superclass, and each of them increments addCount again. This leads to an addition of another 3, so the value of addCount at this point is 7 and not 4 as required.

Output:
Inside MockSubClass.addMethod

1.)count = 1
Inside method addMethodAll

Inside MockSubClass.addMethod

Inside MockSubClass.addMethod

Inside MockSubClass.addMethod

2.)count = 7
3.)List = [LA, LV, SF, Sacramento]
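One hedged way out (my own sketch, not part of the original listing): drop the up-front addition in the subclass override and let the superclass's per-element calls drive the counting.

public void addMethodAll (Collection c) {
    // Mock.addMethodAll calls addMethod once per element, and our
    // override increments addCount each time, so no extra addition
    // is needed here.
    super.addMethodAll(c);
}

Note that this now silently depends on Mock.addMethodAll always being implemented in terms of addMethod, which is exactly the kind of fragile coupling to superclass self-use that makes overriding risky in the first place.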


Thursday 21 November 2013

Understanding the impact of Encapsulation in Programming

I have been learning about OOP principles for a long time now, and Encapsulation, or Data Hiding, has interested me most. This is because, on the face of it, it sounds and appears most simple. But given my little practical experience, I always knew that I understood very little about it and why it even mattered.

I had been reading about it all along in different books on Java and thereafter online. Some of them helped me attain an insight, while others read more like textbook material. I finally happened to read "Effective Java" by Joshua Bloch, and I can see things getting better already.

The most interesting idea underlying the declaration of a class's data members as "private" (or, in a more relaxed environment, "default", commonly referred to as package-private) is to mitigate the effects of coupling between different modules of a program, or of the application itself in a broader sense. Such a careful design ensures that the different modules can be developed, tested and maintained as independently from one another as possible. Any holes or bugs present in one module then do not seriously impair another, which prevents the effects of one or two defects in isolated modules from hampering the entire system. This is in keeping with maintainability, of course.
Just to give you a simple example, consider the following classes, Square and AreaOfSquare:
class Square {
    public int side;
}// end of class Square

class AreaOfSquare {
    private int area;

    public int getArea (Square s) {
        area = s.side * s.side;
        return area;
    }
}// end of class AreaOfSquare
Well, while it is a rather funny thing to declare "side" as "int", it is still a possibility, especially if the designer never anticipated the need for a more precise definition of this parameter. Now, say tomorrow this designer felt like going back on his design and changing the declaration to "double" instead: he would also have to change the declaration of "area" inside the AreaOfSquare class to "double". That is reasonably simple for such a one-to-one dependency between two classes, but if we had a few more classes depending on Square, it would be a fairly complex undertaking from the testing and maintenance perspective. In fact, it is reasonable to assume the designer himself would be unable to recollect all such dependencies and do the repairs accordingly.
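A hedged sketch of the encapsulated alternative (the getSide accessor is my own name, not part of the original example): with the field private and clients going through a stable accessor, the representation can change without every dependent class needing repairs.

class Square {
    private double side;  // the representation can evolve freely behind the accessor

    public Square (double side) {
        this.side = side;
    }

    public double getSide () {
        return side;
    }
}// end of class Square

class AreaOfSquare {
    public double getArea (Square s) {
        // Depends only on the accessor's contract, not on how
        // Square stores its side internally.
        return s.getSide() * s.getSide();
    }
}// end of class AreaOfSquare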

Friday 15 November 2013

Revival of the Functional Programming Paradigm

I have been programming for some years now, and I started with C++ (unusual, because C is the more common starter) back in school. As I moved to high school and later undergrad, I learnt a little more C++, and then it was Java all along. It should be about 6 years since I started programming in Java (on an academic level, that is), and all this is only to say that Imperative Programming, which is pretty intuitive and easy to learn, has become quite popular today.

Somewhere in my final year of undergrad, I was exposed to Lisp, and I never quite made head or tail of it, for it seemed to work rather differently. Luckily for me, it was then just an introduction, confined to a few basic programming examples, and I didn't have to worry about learning it from a test perspective. Then, last year, my professor remarked rather casually that Functional Programming, which is a rather old paradigm, now has a future, and that it would be a great idea for passionate programmers to revisit it.

I did start exploring Scala (a functional programming language), and my initial exploration was largely confined to a handful of web links that didn't really get the crux of Functional Programming across to me. Then I happened to stumble upon a great YouTube video in which the leading scientist Martin Odersky speaks about Functional Programming at the O'Reilly OSCON Java 2011 conference. I have linked the video below and, as I always do, provided a brief explanation a little further down.



One of the biggest challenges faced by programmers these days is Non-Determinism. Now, consider a program having the following block of code below:
x = x + 1;
x = x / 2;
Say you have two threads running in the same program. What is deterministic here is that each thread runs the above block of code in program order, from top to bottom, within itself. What is non-deterministic, however, is the order in which these two lines of code execute in the two threads with respect to each other. Let me provide an example to make things a little clearer:
One possible execution context, A:
Thread 1: x = x + 1
Thread 2: x = x + 1
Thread 2: x = x / 2
Thread 1: x = x / 2
Another valid execution context, B:
Thread 1: x = x + 1
Thread 1: x = x / 2
Thread 2: x = x + 1
Thread 2: x = x / 2
There are, in fact, a few more possible execution contexts. But what's clear is that in multi-threaded programs, the sequence of execution of instructions with respect to time is not certain or deterministic, given that it is entirely at the operating system's discretion to schedule and dispatch threads for execution. While memory models such as the JMM guarantee top-down order of execution within each thread, there are really no guarantees regarding the interleaving of these instructions across different concurrent threads, as can be seen from the above example.
This Non-Determinism is a result of two widely seen aspects of modern day programs:
a. Parallelism , b. Variable/Reference Mutability
We have already seen pretty clearly as to how Parallelism can lead to uncertainty or Non-Determinism. What we need to see now is the role of Variable/Reference Mutability to bring about the same effect.
Taking a deeper look at the above example, we can see that x is a variable and its value is being mutated in the program. Now, if x is a thread-local variable, there is really no reason to worry, for the language memory model ensures that the instruction execution sequence is strictly the program order. But that holds only as long as x is thread-local. If x is a global variable shared by the two threads, the final value of x depends on the complete execution sequence.
Let's consider contexts A and B now:
If the initial value of x is 1, execution context A would give final value, 0.75 whereas B would result in final value, 1 for x.
This is because the value of x computed at each stage depends on the value of x from the previous stage. We can now see quite clearly how Variable/Reference Mutability plays its part in rendering the result of the program non-deterministic.
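Here is a minimal, illustrative Java sketch of exactly that race (class and field names are my own); run it a few times and the printed value can differ:

public class NonDeterminism {
    static double x = 1;  // shared and mutable: the root of the trouble

    public static void main(String[] args) throws InterruptedException {
        Runnable task = new Runnable() {
            public void run() {
                x = x + 1;  // the interleaving of these two statements
                x = x / 2;  // across threads is up to the scheduler
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(x);  // 1.0, 0.75 or other values, depending on the run
    }
}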
Here is where Functional Programming comes to the rescue, with its emphasis upon looking at the problem from the perspective of Space as against Time. This approach still accomplishes parallelism and better resource utilization, but differently: a task is divided into units that can be worked on independently by multiple execution units running in parallel. That is, each execution unit works on its own task unit, and its execution scope lies outside the scope of the other execution units. This is in contrast to dividing a task into sub-units that execution units work on in pipeline fashion, one after the other. While such pipelined execution is faster than simplistic one-sub-unit-at-a-time execution, it is definitely not parallelism in the true sense.
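To contrast with the racy sketch above, here is a hedged sketch of the functional style (again, the names are my own): a pure function applied to immutable inputs produces the same result no matter how threads are scheduled.

public class DeterministicDemo {
    // Pure function: the result depends only on the argument,
    // and nothing is mutated.
    static double step(final double x) {
        return (x + 1) / 2;
    }

    public static void main(String[] args) {
        // Each computation owns its own input; with no shared mutable
        // state, running these on separate threads cannot change them.
        final double r1 = step(1.0);  // always 1.0
        final double r2 = step(3.0);  // always 2.0
        System.out.println(r1 + " " + r2);
    }
}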
So, we have now seen how Functional Programming aids parallelism by knocking out the impact of aspect b, Variable/Reference Mutability. Languages such as Scala have given Functional Programming a whole new face by combining its benefits with those of OOP and modern trends such as Agile development. This is enough to suggest that the solutions to the needs of the future lie spread across a multitude of individual candidate approaches.







Wednesday 6 November 2013

Understanding Factory Methods

I have myself been using factory methods for a while now, and it would be factually incorrect to tell you that I never cared to understand why I used them as against plain constructor instantiation with "new". Nevertheless, I must admit that although I have read about their benefits several times, I was never really able to remember them all clearly. That called for a post on the subject, so that it gets entrenched in my head hereafter.

Now, let's quickly get started:


  • The use of getInstance() or valueOf() in a factory design pattern helps you procure a reference to an object of a type without having to worry explicitly about the sub-type of the object returned. Hazy? It would surely be no better with an explanation of this sort, so let me dig deeper.
Consider an interface, "Car", implemented by "Mazda" and "Sedan". Both of these are cars, so a reference to an object of either Mazda or Sedan will do when one needs a reference of Car type (a minimal sketch follows after this list). It is then fairly clear that a framework such as Spring in Java, which relies on Dependency Injection (understood more simply as a form of late, run-time binding), can exploit this compile-time abstraction and benefit from it.

  • Among the other benefits, what interests me most is an extension of the factory pattern that limits the number of objects of a class to one. Such a class is called a Singleton, and it is typically a means to reuse an existing instance where possible rather than creating new ones. Thus, it is not difficult to see that this is in keeping with memory efficiency. A sketch of this, too, follows below.
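First, a hedged sketch of the factory idea (Car, Mazda and Sedan are the names used above; CarFactory and its getInstance method are my own illustrative additions). Callers depend only on the Car type, and the concrete sub-type is chosen in one place:

interface Car {
    void drive();
}

class Mazda implements Car {
    public void drive() { System.out.println("Driving a Mazda"); }
}

class Sedan implements Car {
    public void drive() { System.out.println("Driving a Sedan"); }
}

class CarFactory {
    // The concrete sub-type is decided here; callers never name it.
    static Car getInstance(String type) {
        if ("mazda".equalsIgnoreCase(type)) {
            return new Mazda();
        }
        return new Sedan();
    }
}

And a common Singleton sketch (Registry is again an illustrative name): the factory method always hands back the same instance, and the private constructor keeps clients from creating more.

class Registry {
    private static final Registry INSTANCE = new Registry();

    private Registry() { }  // no outside instantiation

    public static Registry getInstance() {
        return INSTANCE;
    }
}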

Wednesday 30 January 2013


Market Interest in Elastic Cloud Infrastructure Accelerates — “With enterprises, web application companies and service providers searching for true elastic infrastructure solutions, we are seeing increasing prospect and sales activity surrounding our Open Cloud System,” said Michael Grant, CEO of Cloudscaling. “To support that growth," he continued, "we’ve made thoughtful additions to the leadership of our channels, engineering, enterprise sales and product management functions."

The above excerpt from CloudBlogs daily makes it all the more clear that organizations are now increasingly seeking to adopt elastic cloud infrastructure to cater to their scalability needs. As pointed out earlier, an elastic cloud provides higher performance in scenarios marked by an extremely large number of requests, which is typically a bottleneck for conventional cloud-based operation.

Sunday 13 January 2013

Different Virtualization Paradigms

This post was imminent. Unlike the other posts, the source of content of this post is a web link rather than a YouTube video clip.

For my readers' reference, the link [1] is provided in the reference section below.

From [1], we can see that Virtualization manifests itself in four forms:


Fig. 1: Hardware Virtualization [1]

First and foremost, Hardware Virtualization, which is nothing but emulation of the underlying hardware. The name is somewhat misleading, for it may suggest that the application running atop the hardware is serviced not by the hardware itself but by software that emulates it. Although this interpretation is partially valid, what we need to remember is that the purpose of hardware virtualization is not exactly to substitute the hardware with its emulator, but rather to help application developers and designers test and debug their code and check its behaviour in the target environment. This lets them perform preliminary tests even when the actual hardware is not available. We can see from Fig. 1 that multiple hardware virtual machine instances run on top of the hardware layer, each of which emulates a different hardware environment; e.g. VM1 may emulate a system with 4 GB of physical memory while VM2 emulates a system with 2 GB.


Fig. 2: Full Virtualization [1]

Secondly, we have Full Virtualization. For all my readers who have used VMware: this is exactly the category of virtualization that VMware falls under. Here, a program layer called the hypervisor, or Virtual Machine Manager (VMM), runs on top of the hardware layer, and several guest operating systems may be installed above this VMM layer. The idea here is not to emulate the hardware but to make the presence of multiple guests transparent to each of them, so that each guest operating system believes it is the sole holder of the underlying hardware resources. Each guest operating system executes as a separate virtual machine; in other words, what we essentially mean by a virtual machine is a framework or abstraction that houses a single guest. The hypervisor is tasked with handling protected instructions that require access to the hardware resources, which the guests do not hold in the real sense. Fig. 2 shows the VMM layer intermediary between the hardware and the guest operating systems; it monitors and manages each of the guests above it and co-ordinates their access to the underlying hardware in a manner that keeps the presence of multiple guests transparent to each of them.


Fig. 3: Paravirtualization [1]

Another type of virtualization technique is called Paravirtualization. This is similar to Full Virtualization, the only difference being that support for virtualization is built into the guest operating system itself. The guest operating system code is virtualization-aware, and it provides its assistance and co-operation to aid the hypervisor with execution in a virtual environment. The thin cream strip shown in the guest operating system section of Fig. 3 represents the virtualization-aware code that has been added to each guest to enable it to co-operate with the VMM.


Fig. 4: OS level Virtualization [1]

Besides the three we have discussed above, there is also virtualization at the operating system level. In my opinion, this is nothing but the commonly encountered notion of concurrent processes running in a system. The operating system can create new processes dynamically and then perform management tasks such as scheduling, resource allocation and commitment. Here we can have multiple instances of the same process and use these separate instances to service separate requests. This is precisely how a server handles multiple incoming requests with a single physical resource: it creates multiple logical instances of that one physical resource, creating the illusion that it has not one but many units of each resource.

So what's the idea behind this post? Well, to be frank with you, it may be seen as a sister post to the previous one, which introduced virtualization as a current IT trend. In this post, I compare and contrast the commonly seen manifestations of virtualization to better discern one from the other. One thing I can add here is that the fourth paradigm is what businesses use these days to reduce their maintenance costs. That said, the role of this post is only to present the more intricate, technical details of virtualization. That's it for now, stay tuned for more updates.

Reference:
[1] http://www.ibm.com/developerworks/library/l-linuxvirt/index.html

Friday 11 January 2013

Virtualization: Why do we need it?

We have lately been hearing a lot about Virtualization whenever there is talk about cloud computing. Although many of us may have used virtual machines in the past to serve a very different purpose altogether, most of us are not really sure what virtualization is and why it is so beneficial!

As usual, I post link to a relevant YouTube video here to get you readers started:


The contents of the video are fairly complete, but I might as well expand upon them. Most businesses use a combination of a number of application servers, web servers, image servers, document servers, audio and video servers, and, not to forget, database servers.

Although contemporary web usage trends may suggest that all of the above-mentioned hardware infrastructure is well used almost all the time, this is largely a myth, and more precisely an ill-founded belief! Even if 75% of the hardware appears to be in use at any point of time, going by the average number of server requests recorded, the servers are still largely under-utilized. Hmm, it's a bit of a challenge to present this information more convincingly, but I shall nevertheless give it a try!

What appears active to us is largely superficial. A server typically takes only about 1-10 ms to service each request; if my estimate is flawed, it is only because the real figure should be even smaller! Given this extremely short service time, the time the server machine is kept up and running is far greater than the time it actually spends servicing requests. This clearly demonstrates that a significant amount of energy is wasted per server just keeping it up and ever-ready to service requests upon their arrival. I must reiterate that the cumulative energy wasted is actually pretty high, considering that we use not one server for each purpose, but a number of them for different purposes.

What we must remember here is that any effort to maximize server utilization is limited by the number of incoming requests. So, even if you have done your best to ensure that a server spends a good fraction of its time servicing requests, it can only be as busy as the requests it receives at any point of time allow. So, how exactly do we eliminate this wastage and thereby maximize the profits? The answer lies with virtualization.

Virtualization essentially means creating multiple logical instances of software or hardware on a single physical hardware resource. The technique simulates the available hardware and gives every application running on top of it the feeling that it is the unique holder of the resource; the details of the virtual, simulated environment are kept transparent to the application. As the video illustrates, organizations may use this technique to do away with many of their physical servers and map their function onto one robust, evergreen physical server. The advantages are reduced maintenance cost and reduced energy wastage, which is not very surprising: with fewer physical servers to look after, maintenance becomes much easier and cheaper, and since the energy wasted is a function of the number of physical servers, it is clearly much lower in a virtualized environment. Also, as far as desktop virtualization is concerned, updates may now be delivered much sooner, as a single firmware update no longer updates one client machine but several instances of the same.

Now, I am not extending the scope of this post to include the technical minutiae. This post is only targeted at enlightening the readers with regard to why exactly we need virtualization. The working details will be covered in a subsequent post which is due shortly :-P