Saturday, September 4, 2010

Kernel Definition

The kernel is a program that constitutes the central core of a computer operating system. It has complete control over everything that occurs in the system.


A kernel can be contrasted with a shell (such as bash, csh or ksh in Unix-like operating systems), which is the outermost part of an operating system and a program that interacts with user commands. The kernel itself does not interact directly with the user, but rather interacts with the shell and other programs as well as with the hardware devices on the system, including the processor (also called the central processing unit or CPU), memory and disk drives.

The kernel is the first part of the operating system to load into memory during booting (i.e., system startup), and it remains there for the entire duration of the computer session because its services are required continuously. Thus it is important for it to be as small as possible while still providing all the essential services needed by the other parts of the operating system and by the various application programs.

Because of its critical nature, the kernel code is usually loaded into a protected area of memory, which prevents it from being overwritten by other, less frequently used parts of the operating system or by application programs. The kernel performs its tasks, such as executing processes and handling interrupts, in kernel space, whereas everything a user normally does, such as writing text in a text editor or running programs in a GUI (graphical user interface), is done in user space. This separation is made in order to prevent user data and kernel data from interfering with each other and thereby diminishing performance or causing the system to become unstable (and possibly crashing).

When a computer crashes, it actually means the kernel has crashed. If only a single program has crashed but the rest of the system remains in operation, then the kernel itself has not crashed. A crash is the situation in which a program, either a user application or a part of the operating system, stops performing its expected function(s) and stops responding to other parts of the system. The program might appear to the user to freeze. If such a program is critical to the operation of the kernel, the entire computer could stall or shut down.

The kernel provides basic services for all other parts of the operating system, typically including memory management, process management, file management and I/O (input/output) management (i.e., accessing the peripheral devices). These services are requested by other parts of the operating system or by application programs through a specified set of program interfaces referred to as system calls.
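For example (a minimal Java sketch; the system calls here are issued indirectly, by the JVM and the platform's C library, on the program's behalf), even a simple file write is ultimately carried out by the kernel through calls such as open(), write() and close():

import java.io.FileWriter;
import java.io.IOException;

public class SyscallDemo {
    public static void main(String[] args) throws IOException {
        // Each high-level call below is translated by the JVM and the
        // standard library into system calls that ask the kernel to
        // perform the actual disk I/O (e.g. open, write, close on
        // Unix-like systems).
        try (FileWriter out = new FileWriter("demo.txt")) {
            out.write("hello, kernel\n");
        }
    }
}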

Process management, possibly the most obvious aspect of a kernel to the user, is the part of the kernel that ensures that each process obtains its turn to run on the processor and that the individual processes do not interfere with each other by writing to their areas of memory. A process, also referred to as a task, can be defined as an executing (i.e., running) instance of a program.
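For instance (a minimal Java sketch; running "ls" assumes a Unix-like system), a program never creates a process by itself; it asks the operating system, and it is the kernel that actually creates, schedules and reaps the new process:

import java.io.IOException;

public class ProcessDemo {
    public static void main(String[] args)
            throws IOException, InterruptedException {
        // Ask the operating system for a new process; on Unix-like systems
        // this request ends in kernel services such as fork()/exec().
        Process p = new ProcessBuilder("ls", "-l").inheritIO().start();
        // Block until the kernel reports that the child process has exited.
        int exitCode = p.waitFor();
        System.out.println("child exited with status " + exitCode);
    }
}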

The contents of a kernel vary considerably according to the operating system, but they typically include (1) a scheduler, which determines how the various processes share the processor's time (including in what order), (2) a supervisor, which grants use of the processor to each process when it is scheduled, (3) an interrupt handler, which handles all requests from the various hardware devices (such as disk drives and the keyboard) that compete for the kernel's services and (4) a memory manager, which allocates the system's address spaces (i.e., locations in memory) among all users of the kernel's services.

The kernel should not be confused with the BIOS (Basic Input/Output System). The BIOS is an independent program stored in a chip on the motherboard (the main circuit board of a computer) that is used during the booting process for such tasks as initializing the hardware and loading the kernel into memory. Whereas the BIOS always remains in the computer and is specific to its particular hardware, the kernel can be easily replaced or upgraded by changing or upgrading the operating system or, in the case of Linux, by adding a newer kernel or modifying an existing kernel.

Most kernels have been developed for a specific operating system, and there is usually only one version available for each operating system. For example, the Microsoft Windows 2000 kernel is the only kernel for Microsoft Windows 2000 and the Microsoft Windows 98 kernel is the only kernel for Microsoft Windows 98. Linux is far more flexible in that there are numerous versions of the Linux kernel, and each of these can be modified in innumerable ways by an informed user.

A few kernels have been designed with the goal of being suitable for use with any operating system. The best known of these is the Mach kernel, which was developed at Carnegie Mellon University and is used in the Mac OS X operating system.

It is not necessary for a computer to have a kernel in order to be usable, because it is not strictly necessary for it to have an operating system. That is, it is possible to load and run programs directly on bare metal machines (i.e., computers without any operating system installed), although this is usually not very practical.

In fact, the first generations of computers used bare metal operation. However, it was eventually realized that convenience and efficiency could be increased by retaining small utility programs, such as program loaders and debuggers, in memory between applications. These programs gradually evolved into operating system kernels.

The term kernel is frequently used in books and discussions about Linux, whereas it is used less often when discussing some other operating systems, such as the Microsoft Windows systems. The reasons are that the kernel is highly configurable in the case of Linux and users are encouraged to learn about and modify it and to download and install updated versions. With the Microsoft Windows operating systems, in contrast, there is relatively little point in discussing kernels because they cannot be modified or replaced.

Categories of Kernels

Kernels can be classified into four broad categories: monolithic kernels, microkernels, hybrid kernels and exokernels. Each has its own advocates and detractors.

Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain all the operating system core functions and the device drivers (small programs that allow the operating system to interact with hardware devices, such as disk drives, video cards and printers). Modern monolithic kernels, such as those of Linux and FreeBSD, both of which fall into the category of Unix-like operating systems, feature the ability to load modules at runtime, thereby allowing easy extension of the kernel's capabilities as required, while helping to minimize the amount of code running in kernel space.

A microkernel usually provides only minimal services, such as defining memory address spaces, interprocess communication (IPC) and process management. All other functions, such as hardware management, are implemented as processes running independently of the kernel. Examples of operating systems based on, or strongly influenced by, microkernels are AIX, BeOS, Hurd, Mach, Mac OS X, MINIX and QNX.

Hybrid kernels are similar to microkernels, except that they include additional code in kernel space so that such code can run more swiftly than it would were it in user space. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure microkernels can provide high performance. Hybrid kernels should not be confused with monolithic kernels that can load modules after booting (such as Linux).

Most modern operating systems use hybrid kernels, including Microsoft Windows NT, 2000 and XP. DragonFly BSD, a recent fork (i.e., variant) of FreeBSD, is the first non-Mach based BSD operating system to employ a hybrid kernel architecture.

Exokernels are a still experimental approach to operating system design. They differ from the other types of kernels in that their functionality is limited to the protection and multiplexing of the raw hardware, and they provide no hardware abstractions on top of which applications can be constructed. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program.

Exokernels in themselves are extremely small. However, they are accompanied by library operating systems, which provide application developers with the conventional functionality of a complete operating system. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API (application programming interface), such as one for Linux and one for Microsoft Windows, thus making it possible to run Linux and Windows applications simultaneously.

The Monolithic Versus Micro Controversy

In the early 1990s, many computer scientists considered monolithic kernels to be obsolete, and they predicted that microkernels would revolutionize operating system design. In fact, the development of Linux as a monolithic kernel rather than a microkernel led to a famous flame war (i.e., a war of words on the Internet) between Andrew Tanenbaum, the developer of the MINIX operating system, and Linus Torvalds, who originally developed Linux on, and partly inspired by, MINIX.

Proponents of microkernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash. However, with a microkernel, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error. Although this sounds sensible, it is questionable how important it is in reality, because operating systems with monolithic kernels such as Linux have become extremely stable and can run for years without crashing.

Another disadvantage cited for monolithic kernels is that they are not portable; that is, they must be rewritten for each new architecture (i.e., processor type) that the operating system is to be used on. However, in practice, this has not appeared to be a major disadvantage, and it has not prevented Linux from being ported to numerous processors.

Monolithic kernels also appear to have the disadvantage that their source code can become extremely large. Source code is the version of software as it is originally written (i.e., typed into a computer) by a human in plain text (i.e., human readable alphanumeric characters) and before it is converted by a compiler into object code that a computer's processor can directly read and execute.

For example, the source code for the Linux kernel version 2.4.0 is approximately 100MB and contains nearly 3.38 million lines, and that for version 2.6.0 is 212MB and contains 5.93 million lines. This adds to the complexity of maintaining the kernel, and it also makes it difficult for new generations of computer science students to study and comprehend the kernel. However, the advocates of monolithic kernels claim that in spite of their size such kernels are easier to design correctly, and thus they can be improved more quickly than can microkernel-based systems.

Moreover, the size of the compiled kernel is only a tiny fraction of that of the source code, for example roughly 1.1MB in the case of Linux version 2.4 on a typical Red Hat Linux 9 desktop installation. Contributing to the small size of the compiled Linux kernel is its ability to dynamically load modules at runtime, so that the basic kernel contains only those components that are necessary for the system to start itself and to load modules.

The monolithic Linux kernel can be made extremely small not only because of its ability to dynamically load modules but also because of its ease of customization. In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional operating system (one of the most popular of which is muLinux). This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems (i.e., computer circuitry built into other products).

Although microkernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not interact directly with the hardware, creates a not-insignificant cost in terms of system efficiency.
Source: http://www.linfo.com/

Wednesday, April 21, 2010

== operator vs equals() method


First of all, we have to understand the == operator.
Consider two objects of the following Student class:
class Student {
    int id;
}

There are two reference variables, s1 and s2, referring to the same object at memory address 1001; i.e., the value of both reference variables s1 and s2 is 1001. Meanwhile, s3 refers to a different object, at memory address 1010.
If we compare s1 == s2, it will return true because both refer to the same object.

However, even though the object at memory address 1010 has the same id (id = 1) as the object referred to by s1, both s1 == s3 and s2 == s3 will return false because the variables hold different memory addresses, i.e., they refer to two different objects.
This means the == operator compares the values of the reference variables rather than the contents of the objects.
Now, coming to equals(): this method is defined in the Object class (the cosmic superclass of all Java classes). In the Object class, equals() is defined in such a way that it works just the same as the == operator; i.e., the equals() method provided by Object also compares the values of the reference variables instead of the contents of the objects.
However, we have the facility to override equals() and compare two objects according to our requirements; i.e., to compare two objects on the basis of their content, we can override equals().
For example, in the Student class above, we can override equals() like this:
public boolean equals(Object obj) {
    if (obj == null) {
        return false;                  // nothing is equal to null
    }
    if (getClass() != obj.getClass()) {
        return false;                  // different classes are never equal
    }
    final Student other = (Student) obj;
    if (this.id != other.id) {
        return false;                  // compare by content: the id field
    }
    return true;
}
Now, if we compare s1, s2 and s3 using equals(), the method returns true whenever the ids of the objects are the same, while s1 == s3 and s2 == s3 still return false.
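A minimal sketch of the comparisons described above, using the Student class and the overridden equals() as defined here:

class EqualsDemo {
    public static void main(String[] args) {
        Student s1 = new Student();
        s1.id = 1;
        Student s2 = s1;            // s2 refers to the same object as s1
        Student s3 = new Student();
        s3.id = 1;                  // a different object with the same content

        System.out.println(s1 == s2);      // true:  same reference
        System.out.println(s1 == s3);      // false: different references
        System.out.println(s1.equals(s3)); // true:  same id
    }
}

As a general rule, whenever equals() is overridden, hashCode() should be overridden consistently as well, so that the objects behave correctly in hash-based collections such as HashMap and HashSet.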

Friday, April 16, 2010

Create Executable jar file

Step 1: Create a manifest (.mf) file in Notepad or any text editor, e.g. myfile.mf, containing the following two lines followed by one blank line:

Manifest-Version: 1.0
Main-Class: packageName.MyBeginingClass

Step 2: Package the compiled classes into an executable jar, supplying the manifest:

jar cvmf myfile.mf jarFile.jar *.class
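As a complete end-to-end sketch (the package and class names below are hypothetical placeholders matching the manifest entry above):

// Hypothetical main class matching the Main-Class entry in myfile.mf
package packageName;

public class MyBeginingClass {
    public static void main(String[] args) {
        System.out.println("Hello from the executable jar!");
    }
}

Compile, package and run it:

javac packageName/MyBeginingClass.java
jar cvmf myfile.mf jarFile.jar packageName/*.class
java -jar jarFile.jar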







Disadvantages of inheritance

i. The main disadvantage of using inheritance is that the two classes (base class and derived class) become tightly coupled.
ii. This means neither class can easily be used or changed independently of the other.
iii. Also, over time, maintenance work such as adding new features tends to require changes to both the base and the derived classes.
iv. That is, if a method signature is changed, we are affected in both cases (inheritance and composition).
v. One or both of the classes must then be changed to accommodate the new requirements.
vi. This makes them even more tightly coupled, and lots of contextual information tends to accumulate.
vii. If a method is deleted in the superclass (or in the aggregate, in the case of composition), then we have to refactor wherever that method was used. Things can get a bit complicated with inheritance because our programs will still compile, but the methods of the subclass will no longer be overriding superclass methods; they become independent methods in their own right (see the sketch below).
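A minimal sketch of point vii (the class and method names are hypothetical): suppose Derived overrides a method of Base, and the method is later removed or renamed in Base.

class Base {
    // Suppose this method is later renamed to printReport() during maintenance.
    void report() {
        System.out.println("Base report");
    }
}

class Derived extends Base {
    // Intended as an override of Base.report(). If Base.report() is removed
    // or renamed, this still compiles, but it silently becomes an independent
    // method, and code that calls report() through a Base reference no longer
    // reaches this specialized behavior.
    void report() {
        System.out.println("Derived report");
    }
}

Marking such methods with the @Override annotation turns this silent change into a compile-time error, which is why the annotation is generally recommended.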

Thursday, April 15, 2010

Abstraction vs Encapsulation

Abstraction:
A simple definition of abstraction is hiding the actual implementation of an object from the external world that uses it, exposing only its functionality.
For example,
i. A program that draws circles and squares using those objects need not know how they are implemented. It is enough for the program to know what the behavior of these objects is and how to use them (rather than how they are implemented internally).
ii. When you change gear in your car, you know the gears will change without knowing how they function internally. Abstraction focuses on the outside view of an object (i.e., the interface).
iii. Take another example: the electric switchboard in a house. The wires inside the switchboard are encapsulated (hidden), and the switches are like methods with which we can turn the bulb/fan on or off. The process behind the switch-on/off event is abstracted; i.e., we never see how the bulb/fan responds internally when we flip the switch.
iv. The benefit is that you don't need to deal with the (potentially big and complicated) inner workings of an object in order to use it. In this way, the user is shielded from the object's internal complexity.
Encapsulation:
i. Combining the data (properties/attributes/information) and the methods (the behaviors of the class) that manipulate that data into one capsule (class/object), with the guarantee that the encapsulated data is not accessed by any function/method outside the encapsulated object and is manipulated only according to the programmer's/owner's strategy.
ii. In other words, encapsulation means putting the data and the functions that operate on that data into a single unit (information hiding), which prevents clients from seeing its inside view; i.e., clients can access the data only through the interface the programmer provides, by means of methods.
iii. Such methods embody how the behavior of the abstraction is implemented (i.e., they hide the answer to "how does it work?").
iv. Encapsulation can be achieved by using proper access modifiers, such as public and private, to control the scope of the variables (see the sketch below).
v. Encapsulation promotes separation of the state of an object from its behavior. Hence, when we encapsulate data and the methods that operate on that data into one object, the external program that uses this object need not know its internal workings in order to use it. This makes the object an abstract data type to the external program.
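A minimal sketch of point iv (the class and field names are illustrative): the balance field is private, so external code can manipulate it only through the public methods, which enforce the owner's rules.

class BankAccount {
    private int balance;            // hidden state: inaccessible from outside

    // The only way to change the balance, enforcing the rule that
    // deposits must be positive.
    public void deposit(int amount) {
        if (amount > 0) {
            balance += amount;
        }
    }

    // Read-only view of the state through the interface.
    public int getBalance() {
        return balance;
    }
}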
