Computer Organization & Architecture – Designing for Performance….

May 31, 2015


As a first introduction, I just borrowed a book from the National Library (PNM) entitled Computer Organization & Architecture – Designing for Performance, written by William Stallings. This book is mainly about computer architecture: the CPU, memory, the processor, I/O devices, the control unit, and parallel organization. It explains how computers are organized and what they are made of, defines what a computer system is, and covers the Pentium family of processors, the PowerPC, and so on. The computer system’s memory, including cache memory and DDR SDRAM memory, is also discussed in this book.

RAM technology is divided into two technologies: dynamic and static. A dynamic RAM (DRAM) is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAMs require periodic charge refreshing to maintain data storage. The term dynamic refers to this tendency of the stored charge to leak away, even with power continuously applied.

When only a small number of ROMs with a particular memory content is needed, a less expensive alternative is the programmable ROM (PROM). Like the ROM, the PROM is nonvolatile and may be written into only once. For the PROM, the writing process is performed electrically and may be performed by a supplier or customer at a time later than the original chip fabrication. Special equipment is required for the writing or “programming” process. PROMs provide flexibility and convenience. The ROM remains attractive for high-volume production runs.

In a typical DRAM, the processor presents addresses and control levels to the memory, indicating that a set of data at a particular location in memory should be either read from or written into the DRAM. After a delay, the access time, the DRAM either writes or reads the data. During the access-time delay, the DRAM performs various internal functions, such as activating the high capacitance of the row and column lines, sensing the data, and routing the data out through the output buffers. The processor must simply wait through this delay, slowing performance.

With synchronous access, the DRAM moves data in and out under control of the system clock. The processor or other master issues the instruction and address information, which is latched by the DRAM. The DRAM then responds after a set number of clock cycles. Meanwhile, the master can safely do other tasks while the SDRAM is processing the request.

InfiniBand is a recent I/O specification aimed at the high-end server market. The first version of the specification was released in early 2001 and has attracted numerous vendors. The standard describes an architecture and specifications for data flow between processors and intelligent I/O devices. InfiniBand is intended to replace the PCI bus in servers, to provide greater capacity, increased expandability, and enhanced flexibility in server design. In essence, InfiniBand enables servers, remote storage, and other network devices to be attached to a central fabric of switches and links. The switch-based architecture can connect up to 64,000 servers, storage systems, and networking devices.

The Pentium Processor – Register Organization – The register organization includes the following types of registers:

1. General: There are eight 32-bit general-purpose registers. These may be used for all types of Pentium instructions; they can also hold operands for address calculations. In addition, some of these registers also serve special purposes. For example, string instructions use the contents of the ECX, ESI, and EDI registers as operands without having to reference these registers explicitly in the instruction. As a result, a number of instructions can be encoded more compactly.

2. Segment: The 16-bit segment registers contain segment selectors, which index into segment tables. The code segment (CS) register references the segment containing the instruction being executed. The stack segment (SS) register references the segment containing a user-visible stack. The remaining segment registers (DS, ES, FS, GS) enable the user to reference up to four separate data segments at a time.

* The rest you can find at page 442 Chapter 12 (Processor Structure and Function).

In conclusion, this is a great book to read if you want to know about computer architecture and organization, covering processors from the 80386 to the Pentium 4. For parallel organization or parallel processing, you can check out pages 637 and 638 of the book. Stallings provides a clear, comprehensive presentation of the organization and architecture of modern-day computers, emphasizing both fundamental principles and the critical role of performance in driving computer design. The text conveys concepts through a wealth of concrete examples highlighting modern CISC and RISC systems.

p/s:- Some of the excerpts are taken from this book – Computer Organization & Architecture – Designing for Performance – 7th Edition – written by William Stallings, published by Pearson Prentice Hall.


Windows Group Policy…..

April 28, 2015


Just got a book from the National Library entitled Windows Group Policy written by William R. Stanek. As a first introduction, Group Policy is a set of rules that you can apply throughout the enterprise. Although you can use Group Policy to manage servers and workstations running Windows 2000 or later, Group Policy has changed since it was first implemented with Windows 2000. Group Policy settings enable you to control the configuration of the operating system and its components. You can also use policy settings to configure computer and user scripts, folder redirection, computer security, software installation, and more.

Now, I’m writing some description and notes about Chapter 2 – Deploying Group Policy – of the book. Unlike Windows 2000, Windows XP Professional, and Windows Server 2003, Windows Vista and Windows Server 2008 use the Group Policy Client service to isolate Group Policy notification and processing from the Windows logon process. Separating Group Policy from the Windows logon process reduces the resources used for background processing of policy while increasing overall performance and allowing delivery and application of new Group Policy files as part of the update process without requiring a restart.

Each new version of the Windows operating system introduces policy changes. Sometimes these changes have made older policies obsolete on newer versions of Windows. In this case, the policy works only on specific versions of the Windows operating system, such as only on Windows XP Professional and Windows Server 2003. Generally speaking, however, most policies are forward compatible. This means that policies introduced in Windows 2000 can, in most cases, be used on Windows 2000, Windows XP Professional, Windows Server 2003, Windows Vista, and Windows Server 2008. It also means that Windows XP Professional policies usually aren’t applicable to Windows 2000 and that policies introduced in Windows Vista aren’t applicable to Windows 2000 or Windows XP Professional.

On a computer running Windows Vista, Windows Server 2008, or later versions, you’ll automatically see the new features and policies as well as standard features and policies when you use GPMC 2.0 or later to work with Group Policy. However, the new features and policies aren’t automatically added to Group Policy objects (GPOs). Don’t worry—there’s an easy way to fix this, and afterward you’ll be able to work with new features and policies as appropriate throughout your domain.

With the original file format used with policies, called ADM, policy definition files are stored in the GPO to which they relate. As a result, each GPO stores copies of all applicable policy definition files and can grow to be multiple megabytes in size. In contrast, with the ADMX format, policy definition files are not stored with the GPOs with which they are associated by default. Instead, the policy definition files can be stored centrally on a domain controller and only the applicable settings are stored within each GPO. As a result, GPOs that use ADMX are substantially smaller than their counterparts that use ADM. For example, while a GPO that uses ADM may be 4 megabytes (MB) in size, a GPO that uses ADMX may be only 4 kilobytes (KB) in size.

The way domain controllers replicate the SYSVOL depends on the domain functional level. When a domain is running at Windows 2000 native or Windows Server 2003 functional level, domain controllers replicate the SYSVOL using File Replication Service (FRS). When a domain is running at Windows Server 2008 functional level, domain controllers replicate the SYSVOL using Distributed File System (DFS).

The storage techniques and replication architectures for DFS and FRS are decidedly different. File Replication Service (Ntfrs.exe) stores FRS topology and schedule information in Active Directory and periodically polls Active Directory to retrieve updated information using Lightweight Directory Access Protocol (LDAP). Internally, FRS makes direct calls to the file system using standard input and output. When communicating with remote servers, FRS uses the remote procedure call (RPC) protocol.

Active Directory supports three levels of Group Policy objects:
1. Site GPOs: Group Policy objects applied at the site level to a particular Active Directory site.
2. Domain GPOs: Group Policy objects applied at the domain level to a particular Active Directory domain.
3. Organizational Unit (OU) GPOs: Group Policy objects applied at the OU level to a particular Active Directory OU.

Through inheritance, a GPO applied to a parent container is inherited by a child container. This means that a policy preference or setting applied to a parent object is passed down to a child object. For example, if you apply a policy setting in a domain, the setting is inherited by organizational units within the domain. In this case, the GPO for the domain is the parent object and the GPOs for the organizational units are the child objects. In an Active Directory environment, the basic order of inheritance goes from the site level to the domain level to the organizational unit level. This means that the Group Policy preferences and settings for a site are passed down to the domains within that site, and the preferences and settings for a domain are passed down to the organizational units within that domain.

To end this chapter, I encourage you all to read the rest of the description about Group Policy in Chapter 2, as well as the remaining chapters about Group Policy in this book. It’s quite interesting to read about….

p/s:- This is an excerpt taken from the book – Windows Group Policy – Administrator’s Pocket Consultant written by William R. Stanek and published by Microsoft Press.


GWT In Practice……

March 30, 2015

Recently, I just borrowed a book from the National Library entitled GWT In Practice written by Robert T. Cooper and Charlie E. Collins. GWT stands for Google Web Toolkit. GWT is a Java-to-JavaScript cross-compiler. That is, it takes Java code and compiles it into JavaScript to be run in a browser. Other aspects that set GWT apart include a harness for debugging Java bytecode directly as it executes in a simulated browser environment, a set of core UI and layout widgets with which to build applications, a Remote Procedure Call (RPC) system for handling communications with a host web server, internationalization support, and testing mechanisms. Another reason GWT is significant and different from some other RIA offerings is that it provides tooling and testing support. GWT includes a powerful debugging shell that allows you to test and debug your code as it interacts with the native browser on your platform.

The testing support GWT provides is based on JUnit and on a few extensions the toolkit provides. Your GWT code can be tested as Java, from the shell. After you compile your code into JavaScript, the same test can be used again in that form by using further scaffolding provided by GWT. This allows you to test on various browser versions and, if desired, even on different platform and browser combinations.

The GWT Java compiler takes Java code and compiles it into JavaScript—that’s all. It has some advanced rules for doing this, however. By defining GWT compile tasks into modules, the compiler can perform more analysis on the code as it’s processed, and branch into multiple compilation artifacts for different output targets. This means that when compiling a class, you can specify differing implementations based on known parameters. The obvious switch point is the user agent or client browser you’re targeting. This feature drives the core of GWT’s cross-browser compatibility.

Built on top of GWT’s intelligent compilation system is a cross-browser UI layer. The real magic here comes from implementing the UI elements in Java and then using a browser-specific implementation of the core DOM to build out the native browser elements as they’re needed by the higher-level Java layer. Whereas some Ajax libraries have a lot of focus on UI widgets, GWT is intended to provide a core of UI functionality that users and the community can build upon.
The GWT UI layer provides a wide variety of layout-related panels, data representation constructs such as Tree and Grid, a set of user input elements, and more. The 1.4 release of GWT began to expand the UI toolkit to include some new advanced elements, like a rich text editor and a suggest box. This release also started to include some great new optimized UI elements that draw from the power of the plugin-capable compiler, such as the ImageBundle.

The GWT shell allows you to test your application in a browser while executing the native Java bytecode. This gives you the ability to use all your favorite Java tools to inspect your application, including profilers, step-through debugging, and JTI-based monitors. This hosted mode browser, with an embedded Apache Tomcat server, is also what makes it possible to test your compiled JavaScript with JUnit.

First, GWT projects are defined in terms of modules, composed of resources, configuration, and source. The module configuration defines compile-time information about a project and specifies resources needed at runtime. Beyond configuration, modules also make possible a rich inheritance mechanism. Because of this capability, projects can be complete web applications, they can be of a pure library nature, or they can fall anywhere in between. One thing a module defines is the starting point for a project’s code, known as an entry point. Entry point classes are coded in Java and are referenced by a module definition and compiled to JavaScript. Modules themselves, and the entry points they define, are invoked through a <script> reference on an HTML page, known as a host page. Host pages invoke GWT projects and also support a few special <meta> tags that can be used to tweak things. At a high level, these are the three main components of a GWT project: a module configuration file, an entry point class, and an HTML host page.
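As a hedged sketch of what such a module definition might look like (the module name, package, and entry point class below are hypothetical, not taken from the book):

```xml
<!-- HypotheticalApp.gwt.xml: a minimal module definition.
     The package and class names are illustrative only. -->
<module>
  <!-- Inherit the core GWT User module -->
  <inherits name="com.google.gwt.user.User"/>
  <!-- The entry point class, written in Java and compiled to JavaScript -->
  <entry-point class="com.example.client.HypotheticalApp"/>
</module>
```

The host page would then pull in the compiled output with a script reference, completing the three-part picture of module configuration, entry point, and host page.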

Lastly, GWT is great for building websites that use JavaScript. GWT borrows from the approaches that have come before it and takes things in a new direction, expanding the web development frontiers. All the while, GWT maintains the advantages of traditional compiled-language development by starting out from Java; and it adopts the successful component-oriented development approach, applying these concepts to the web tier in a responsive Ajax fashion.

In addition to starting with Java, GWT also embraces the parts of the web that have worked well and allows developers and users to remain on familiar ground. This is an overlooked yet significant aspect of GWT. GWT doesn’t try to hide the web from you, just to achieve the moniker “rich web application.” Instead, GWT happily integrates with and uses HTML, JavaScript, and CSS.

p/s:- Some of this article is excerpted from the book GWT In Practice written by Robert T. Cooper and Charlie E. Collins, published by Manning. Hope you guys enjoy reading it….








ScreenOS Cookbook…..

March 11, 2015

ScreenOS is one of the operating systems used in Juniper Networks switches and routers. If you buy a Juniper switch or router, you will likely find ScreenOS installed on it. ScreenOS is used to administer the traffic flow of network designs that use OSPF, BGP, VPN, NAT, DHCP, and so on. Recently, I borrowed the ScreenOS Cookbook from the National Library (PNM). It’s quite a good book to read if you’re planning to be a network administrator working with Juniper’s switch and router product line. Administering ScreenOS is easy yet challenging, just like administering Cisco IOS Software on Cisco’s switch and router product line. We can use ScreenOS to administer firewall configuration, wireless, route mode and static routing, transparent mode, and so on….

DHCP Server Maintenance.

You can use ScreenOS’s get commands to view a feature’s functionality. In the output of the get interface wireless2 dhcp server command, the DHCP server is enabled and on, and is not using the next-server option, which allows configuration information to be shared among multiple DHCP servers. Also, the DHCP client will update information to the server component.

The get interface <interface name> dhcp server ip allocate command shows the allocated IPs per interface, as well as the Media Access Control (MAC) address and time remaining in the lease. As each interface can have its own DHCP settings, different ranges may be configured on the device. To reset the DHCP leases, use the clear dhcp server <interface name> ip command. You can use this command to clear all leases or just a particular IP address:


FIREWALL-A->clear dhcp server wireless ip all

FIREWALL-A->get db str


Use get commands:

FIREWALL-A->get interface wireless2 dhcp server

FIREWALL-A->get interface wireless1 dhcp server option

When the clear dhcp server <interface name> ip all command is issued, the flash:dhcpserv1.txt file is modified. This file is used to store DHCP lease information so that leases can survive a system reboot. When the file is modified, each interface that is not cleared has the lease information for that interface rewritten so as to preserve the information.

The get interface <interface name> dhcp server option command shows all options configured on the DHCP server for that interface, including custom options. When custom options are configured, each option appears in the command output with the name Custom, and the code in parentheses immediately following.

Configure DHCP Relay

FIREWALL-A->set interface ethernet2 dhcp relay service

FIREWALL-A->set interface ethernet2 dhcp relay server-name

FIREWALL-A->set address untrust DHCP_SVR_10.3.1.1

FIREWALL-A->set policy from untrust to trust DHCP_SVR_10.3.1.1 any dhcp-relay permit log

Juniper Networks’ firewall system products, which include the NS5000 Series and the ISG Series, do not have DHCP server functionality built in. As these devices are typically used to protect large-scale environments, they are frequently sandwiched between pairs of routers. Furthermore, DHCP servers are often already available and installed elsewhere in the network. Occasionally, however, hosts requiring DHCP services are directly connected to the firewall.

To accommodate DHCP services for hosts that connect to the firewall as their gateway, you can set up DHCP relay. To configure DHCP relay, simply enable the DHCP relay service on the interface, and configure the server address to forward the DHCP messages.

If you want to send these messages across a tunnel, use the set interface <interface name> dhcp relay vpn command. Additionally, a policy that permits dhcp-relay from the server side to the client side (in this case, from untrust to trust) is required.

You can verify that DHCP relay is enabled on the interface by using the get interface command:

FIREWALL-A->get int eth2

For more concise output, use the get interface <interface name> dhcp relay command:

FIREWALL-A->get int eth2 dhcp relay


p/s:- ScreenOS uses a CLI similar to Cisco IOS Software. We can manage network connections and network designs using ScreenOS. We can also pass multicast traffic through a transparent-mode device and create virtual systems (covered in the last chapter). Some of this article is excerpted from the ScreenOS Cookbook by Stefan Brunner, Vik Davar, David Delcourt, Ken Draper, Joe Kelly & Sunil Wadhwa, from O’Reilly.




Managing NFS and NIS…..

January 29, 2015

Recently, I just borrowed a book entitled Managing NFS and NIS from O’Reilly & Associates, written by Hal Stern. The book is quite impressive for a system administrator or system engineer who deals with NFS and NIS on a Linux or UNIX operating system. NIS provides a distributed database system for common configuration files. NIS servers manage copies of the database files, and NIS clients request information from the servers instead of using their own local copies of these files. NFS is a distributed filesystem. An NFS server has one or more filesystems that are mounted by NFS clients; to the NFS clients, the remote disks look like local disks.

NFS achieves the first level of transparency by defining a generic set of filesystem operations that are performed on a Virtual File System (VFS). The second level comes from the definition of virtual nodes, which are related to the more familiar Unix filesystem inode structures but hide the actual structure of the physical filesystem beneath them. The set of all procedures that can be performed on files is the vnode interface definition. The vnode and VFS specifications together define the NFS protocol. The Virtual File System allows a client system to access many different types of filesystems as if they were all attached locally. VFS hides the differences in implementations under a consistent interface. On a Unix NFS client, the VFS interface makes all NFS filesystems look like Unix filesystems, even if they are exported from IBM MVS or Windows NT servers. The VFS interface is really nothing more than a switchboard for filesystem- and file-oriented operations.

NFS is an RPC-based protocol, with a client-server relationship between the machine having the filesystem to be distributed and the machine wanting access to that filesystem. NFS kernel server threads run on the server and accept RPC calls from clients. These server threads are initiated by an nfsd daemon. NFS servers also run the mountd daemon to handle filesystem mount requests and some pathname translation. On an NFS client, asynchronous I/O threads (async threads) are usually run to improve NFS performance, but they are not required.

Each version of the NFS RPC protocol contains several procedures, each of which operates on either a file or a filesystem object. The basic procedures performed on an NFS server can be grouped into directory operations, file operations, link operations, and filesystem operations. Directory operations include mkdir and rmdir, which create and destroy directories like their Unix system call equivalents. readdir reads a directory, using an opaque directory pointer to perform sequential reads of the same directory. Other directory-oriented procedures are rename and remove, which operate on entries in a directory the same way the mv and rm commands do. create makes a new directory entry for a file.

The NFS protocol is stateless, meaning that there is no need to maintain information about the protocol on the server. The client keeps track of all information required to send requests to the server, but the server has no information about previous NFS requests, or how various NFS requests relate to each other. Remember the differences between the TCP and UDP protocols: UDP is a stateless protocol that can lose packets or deliver them out of order; TCP is a stateful protocol that guarantees that packets arrive and are delivered in order. The hosts using TCP must remember connection state information to recognize when part of a transmission was lost.

NFS RPC requests are sent from a client to the server one at a time. A single client process will not issue another RPC call until the call in progress completes and has been acknowledged by the NFS server. In this respect NFS RPC calls are like system calls — a process cannot continue with the next system call until the current one completes. A single client host may have several RPC calls in progress at any time, coming from several processes, but each process ensures that its file operations are well ordered by waiting for their acknowledgements. Using the NFS async threads makes this a little more complicated, but for now it’s helpful to think of each process sending a stream of NFS requests, one at a time.

Lastly, managing NFS and NIS is quite a complicated task. The system administrator or system engineer has to be very careful in designing the network file system. PC/NFS is a client-only implementation for the DOS operating system. There are also mail services that we can centralize using NFS and NIS. Overall, Managing NFS and NIS is a good book to read…

p/s:- Some of this article is taken from excerpts of Managing NFS and NIS – O’Reilly & Associates, written by Hal Stern.





Computer Forensic….

January 8, 2015








Computer forensics is a new field in the IT industry. Nowadays, computer forensics is taught as a subject and course in universities and colleges. In Malaysia, computer forensics is a field that has only recently been introduced. Computer forensics is basically an investigation carried out to find evidence about criminal activities that can be presented in a court of law. I just borrowed the book entitled Computer Forensics For Dummies from the National Library (PNM).

Workplaces have disaster-recovery and business-continuity systems that perform automatic backups. Companies are required to retain business records for audit or litigation purposes. Even if you never saved a particular file to the networked server, it might still be retained on multiple backup media somewhere. Instant, text, and voice messages exist in digital format and, therefore, are stored on the servers of your Internet service provider (ISP), cell provider, or phone company. Although text messages are more transient than e-mail, messages are stored and backed up the same way. Recipients have copies that may also be stored and backed up.

Your job as a computer forensics investigator involves a series of processes to find, analyze, and preserve the relevant digital files or data for use as e-evidence. You perform those functions as part of a case. Each computer forensic case has a life cycle that starts with getting permission to invade someone else’s private property. You might enter into the case at a later stage in the life cycle. Taken to completion, the case ends in court where a correct verdict is made, unless something causes the case to terminate earlier.

The first step in any computer forensic investigation is to identify the type of media you’re working with. The various types of media you might encounter are described in this list:
1. Fixed storage device: Any device that you use to store data and that’s permanently attached to a computer is a fixed storage device. The type of storage device you’re probably most familiar with is the classic magnetic-media hard drive, which is inside almost every personal computer. Traditional hard drives are mechanisms that rotate disks coated with a magnetic material; however, new technology uses chip-based storage media known as the solid-state drive (SSD). It’s as though your thumb flash drive is 1,000 times larger than its current size!

2. Portable storage device: Most people consider floppy disks (remember those?) or flash memory drives to be the only true portable storage devices, but any device that you can carry with you qualifies. iPods, MP3 players, mobile phones, and even some wristwatches are also portable storage devices. Unlike fixed storage, where most interfaces are standardized, mobile devices have different interfaces, which adds to the complexity of your case.

3. Memory storage area: With the move from desktop computers to mobile devices, investigators are seeing increasingly more evidence that’s found only in memory. The obvious type of device is a mobile phone (such as the Apple iPhone) or personal digital assistant that often saves data only in volatile memory. After the battery dies, your data evidence also dies. Not-so-obvious places to find evidence in volatile memory are the RAM areas of regular computers and servers as well as some network devices.

4. Network storage device: With the growth of the Internet and the exponential increase in the power of network devices, data can be found on devices that until now haven’t held forensic data of any value. Devices such as routers, switches, and even wireless access points can now save possible forensic information and even archive it for future access.

5. Memory card: In addition to using built-in RAM memory, many devices now use digital memory cards to add storage. Common types are SD and MMC flash cards. To read this type of memory device, you often have to use a multimedia card reader.

In conclusion, computer forensics is a good and interesting field to venture into here in Malaysia. There are some companies that provide services in the computer forensics field. Some use operating systems such as Backtrack 5 R2 or the Hex Live CD to do forensic jobs. EnCase and FTK can also help us to do computer forensics investigations. I also provide computer forensics services to my customers – PC Network Services. The future of computer forensics in Malaysia is quite challenging, and it also provides better job opportunities in forensic investigation.

p/s:- Some of this article is taken from excerpts of Computer Forensics For Dummies – Wiley Publishing Inc. Authors: Linda Volonino and Reynaldo Anzaldua.





UNIX Network Programming….

December 16, 2014

Just borrowed a book from the National Library (PNM) entitled UNIX Network Programming by Stevens, Fenner, and Rudoff. Previously I posted a blog about the book covering the TCP client/server chapters. Now, I’m going to touch on the chapters about I/O multiplexing and socket options.

Nonblocking I/O Model
When we set a socket to be nonblocking, we are telling the kernel “when an I/O operation that I request cannot be completed without putting the process to sleep, do not put the process to sleep, but return an error instead.” We will describe nonblocking I/O in Chapter 16.

The first three times that we call recvfrom, there is no data to return, so the kernel immediately returns an error of EWOULDBLOCK instead. The fourth time we call recvfrom, a datagram is ready, it is copied into our application buffer, and recvfrom returns successfully. We then process the data. When an application sits in a loop calling recvfrom on a nonblocking descriptor like this, it is called polling. The application is continually polling the kernel to see if some operation is ready. This is often a waste of CPU time, but this model is occasionally encountered, normally on systems dedicated to one function.

I/O Multiplexing Model
With I/O multiplexing, we call select or poll and block in one of these two system calls, instead of blocking in the actual I/O system call.

We block in a call to select, waiting for the datagram socket to be readable. When select returns that the socket is readable, we then call recvfrom to copy the datagram into our application buffer.

Comparing Figure 6.3 to Figure 6.1, there does not appear to be any advantage, and in fact, there is a slight disadvantage because using select requires two system calls instead of one. But the advantage in using select, which we will see later in this chapter, is that we can wait for more than one descriptor to be ready. Another closely related I/O model is to use multithreading with blocking I/O. That model very closely resembles the model described above, except that instead of using select to block on multiple file descriptors, the program uses multiple threads (one per file descriptor), and each thread is then free to call blocking system calls like recvfrom.

Signal-Driven I/O Model
We can also use signals, telling the kernel to notify us with the SIGIO signal when the descriptor is ready. We call this signal-driven I/O .

We first enable the socket for signal-driven I/O (as we will describe in Section 25.2) and install a signal handler using the sigaction system call. The return from this system call is immediate and our process continues; it is not blocked. When the datagram is ready to be read, the SIGIO signal is generated for our process. We can either read the datagram from the
signal handler by calling recvfrom and then notify the main loop that the data is ready to be processed (this is what we will do in Section 25.3), or we can notify the main loop and let it read the datagram.

Regardless of how we handle the signal, the advantage to this model is that we are not blocked while waiting for the datagram to arrive. The main loop can continue executing and just wait to be notified by the signal handler that either the data is ready to process or the datagram is ready to be read.

Asynchronous I/O Model
Asynchronous I/O is defined by the POSIX specification, and various differences in the realtime functions that appeared in the various standards which came together to form the current POSIX specification have been reconciled. In general, these functions work by telling the kernel to start the operation and to notify us when the entire operation (including the copy of the data from the kernel to our buffer) is complete. The main difference between this model and the
signal-driven I/O model in the previous section is that with signal-driven I/O, the kernel tells us when an I/O operation can be initiated, but with asynchronous I/O, the kernel tells us when an I/O operation is complete.

We call aio_read (the POSIX asynchronous I/O functions begin with aio_ or lio_) and pass the kernel the descriptor, buffer pointer, buffer size (the same three arguments for read), file offset (similar to lseek), and how to notify us when the entire operation is complete. This system call returns immediately and our process is not blocked while waiting for the I/O to complete. We assume in this example that we ask the kernel to generate some signal when the operation is complete. This signal is not generated until the data has been copied into our application buffer, which is different from the signal-driven I/O model. As of this writing, few systems support POSIX asynchronous I/O. We are not certain, for
example, if systems will support it for sockets. Our use of it here is as an example to compare against the signal-driven I/O model.

IPv4 Socket Options
These socket options are processed by IPv4 and have a level of IPPROTO_IP. We defer discussion of the multicasting socket options until Section 21.6.
IP_HDRINCL Socket Option
If this option is set for a raw IP socket (Chapter 28), we must build our own IP header for all the datagrams we send on the raw socket. Normally, the kernel builds the IP header for datagrams sent on a raw socket, but there are some applications (notably traceroute) that build their own IP header to override values that IP would place into certain header fields. When this option is set, we build a complete IP header, with the following exceptions:
- IP always calculates and stores the IP header checksum.
- If we set the IP identification field to 0, the kernel will set the field.
- If the source IP address is INADDR_ANY, IP sets it to the primary IP address of the outgoing interface.

Setting IP options is implementation-dependent. Some implementations take any IP options that were set using the IP_OPTIONS socket option and append these to the header that we build, while others require our header to also contain any desired IP options. Some fields must be in host byte order, and some in network byte order. This is
implementation-dependent, which makes writing raw packets with IP_HDRINCL not as portable as we’d like.
We show an example of this option in Section 29.7.

Well, I've only touched on the IPv4 socket options. For the rest of the chapters and info, you'll have to borrow the book from the National Library or buy it at a bookstore.

P/S: For your info, the PIKOM PC Fair is just around the corner, running from 19th to 21st December 2014. I'm looking forward to attending the fair, which will be held at the KL Convention Centre. Some of this article is an excerpt from the book UNIX Network Programming, written by Stevens, Fenner and Rudoff and published by Addison-Wesley.


