GWT In Practice……

March 30, 2015








Recently, I borrowed a book from the National Library entitled GWT In Practice, written by Robert T. Cooper and Charlie E. Collins. GWT stands for Google Web Toolkit. GWT is a Java-to-JavaScript cross-compiler; that is, it takes Java code and compiles it into JavaScript to be run in a browser. Other aspects that set GWT apart include a harness for debugging Java bytecode directly as it executes in a simulated browser environment, a set of core UI and layout widgets with which to build applications, a Remote Procedure Call (RPC) system for handling communications with a host web server, internationalization support, and testing mechanisms. Another reason GWT is significant and different from some other RIA offerings is that it provides tooling and testing support. GWT includes a powerful debugging shell that allows you to test and debug your code as it interacts with the native browser on your platform.

The testing support GWT provides is based on JUnit and on a few extensions the toolkit provides. Your GWT code can be tested as Java, from the shell. After you compile your code into JavaScript, the same test can be used again in that form by using further scaffolding provided by GWT. This allows you to test on various browser versions and, if desired, even on different platform and browser combinations.

The GWT Java compiler takes Java code and compiles it into JavaScript—that’s all. It has some advanced rules for doing this, however. By defining GWT compile tasks into modules, the compiler can perform more analysis on the code as it’s processed, and branch into multiple compilation artifacts for different output targets. This means that when compiling a class, you can specify differing implementations based on known parameters. The obvious switch point is the user agent or client browser you’re targeting. This feature drives the core of GWT’s cross-browser compatibility.

Built on top of GWT’s intelligent compilation system is a cross-browser UI layer. The real magic here comes from implementing the UI elements in Java and then using a browser-specific implementation of the core DOM to build out the native browser elements as they’re needed by the higher-level Java layer. Whereas some Ajax libraries have a lot of focus on UI widgets, GWT is intended to provide a core of UI functionality that users and the community can build upon.
The GWT UI layer provides a wide variety of layout-related panels, data representation constructs such as Tree and Grid, a set of user input elements, and more. The 1.4 release of GWT began to expand the UI toolkit to include some new advanced elements, like a rich text editor and a suggest box. This release also started to include some great new optimized UI elements that draw from the power of the plugin-capable compiler, such as the ImageBundle.

The GWT shell allows you to test your application in a browser while executing the native Java bytecode. This gives you the ability to use all your favorite Java tools to inspect your application, including profilers, step-through debugging, and JVMTI-based monitors. This hosted mode browser, with an embedded Apache Tomcat server, is also what makes it possible to test your compiled JavaScript with JUnit.

First, GWT projects are defined in terms of modules, composed of resources, configuration, and source. The module configuration defines compile-time information about a project and specifies resources needed at runtime. Beyond configuration, modules also make possible a rich inheritance mechanism. Because of this capability, projects can be complete web applications, they can be of a pure library nature, or they can fall anywhere in between. One thing a module defines is the starting point for a project’s code, known as an entry point. Entry point classes are coded in Java and are referenced by a module definition and compiled to JavaScript. Modules themselves, and the entry points they define, are invoked through a <script> reference on an HTML page, known as a host page. Host pages invoke GWT projects and also support a few special <meta> tags that can be used to tweak things. At a high level, these are the three main components of a GWT project: a module configuration file, an entry point class, and an HTML host page.

Lastly, GWT is great for building project websites that use JavaScript. GWT borrows from the approaches that came before it and takes things in a new direction, expanding the frontiers of web development. All the while, GWT maintains the advantages of traditional compiled-language development by starting out from Java, and it adopts the successful component-oriented development approach, applying these concepts to the web tier in a responsive Ajax fashion.

In addition to starting with Java, GWT also embraces the parts of the web that have worked well and allows developers and users to remain on familiar ground. This is an overlooked yet significant aspect of GWT. GWT doesn’t try to hide the web from you just to achieve the moniker “rich web application.” Instead, GWT happily integrates with and uses HTML, JavaScript, and CSS.

p/s:- Some of this article is excerpted from the book GWT In Practice, written by Robert T. Cooper and Charlie E. Collins and published by Manning. Hope you guys enjoy reading it….








ScreenOS Cookbook…..

March 11, 2015







ScreenOS is the operating system used on Juniper Networks’ firewall and VPN product line (the former NetScreen devices). If you buy one of these Juniper devices, you will find ScreenOS installed on it. ScreenOS is used to administer the traffic flow of a network design that uses OSPF, BGP, VPN, NAT, DHCP and so on… Recently, I borrowed the ScreenOS Cookbook from the National Library (PNM). It’s quite a good book to read if you’re planning to be a network administrator working with Juniper’s firewall product line. Administering ScreenOS is easy yet challenging, much like administering Cisco IOS Software on Cisco’s switches and routers. We can use ScreenOS to administer firewall configuration, wireless, route mode and static routing, transparent mode and so on….

DHCP Server Maintenance.

You can use ScreenOS’s get commands to view a feature’s functionality. In the output of the get interface wireless2 dhcp server command, the DHCP server is enabled and on, and is not using the next-server option, which allows configuration information to be shared among multiple DHCP servers. Also, the DHCP client will update information to the server component.

The get interface <interface name> dhcp server ip allocate command shows the allocated IPs per interface, as well as the Media Access Control (MAC) address and the time remaining in the lease. As each interface can have its own DHCP settings, different ranges may be configured on the device. To reset the DHCP leases, use the clear dhcp server <interface name> ip command. You can use this command to clear all leases or just a particular IP address:


FIREWALL-A->clear dhcp server wireless ip all

FIREWALL-A->get db str


Use get commands:

FIREWALL-A->get interface wireless2 dhcp server

FIREWALL-A->get interface wireless1 dhcp server option

When the clear dhcp server <interface name> ip all command is issued, the flash:dhcpserv1.txt file is modified. This file is used to store DHCP lease information so that leases can survive a system reboot. When the file is modified, each interface that is not cleared has the lease information for that interface rewritten so as to preserve the information.

The get interface <interface name> dhcp server option command shows all options configured on the DHCP server for that interface, including custom options. When custom options are configured, each option appears in the command output with the name Custom, and the code in parentheses immediately following.

Configure DHCP Relay

FIREWALL-A->set interface ethernet2 dhcp relay service

FIREWALL-A->set interface ethernet2 dhcp relay server-name

FIREWALL-A->set address untrust DHCP_SVR_10.3.1.1

FIREWALL-A->set policy from untrust to trust DHCP_SVR_10.3.1.1 any dhcp-relay permit log

Juniper Networks’ firewall system products, which include the NS5000 Series and the ISG Series, do not have DHCP server functionality built in. As these devices are typically used to protect large-scale environments, they are frequently sandwiched in between pairs of routers. Furthermore, DHCP servers are often already available and installed elsewhere in the network. Occasionally, however, hosts requiring DHCP services are directly connected to the firewall.

To accommodate DHCP services for hosts that connect to the firewall as their gateway, you can set up DHCP relay. To configure DHCP relay, simply enable the DHCP relay service on the interface, and configure the server address to forward the DHCP messages to.

If you want to send these messages across a tunnel, use the set interface <interface name> dhcp relay vpn command. Additionally, a policy that permits dhcp-relay from the server to the client side (in this case, from untrust to trust) is required.

You can verify that DHCP relay is enabled on the interface by using the get interface command:

FIREWALL-A->get int eth2

For more concise output , use the get interface <interface name> dhcp relay command:

FIREWALL-A->get int eth2 dhcp relay


p/s:- ScreenOS uses a CLI much like Cisco IOS Software… We can manage network connections and network designs using ScreenOS. We can also pass multicast traffic through a transparent-mode device and create virtual systems (covered in the last chapter)… Some of this article is excerpted from the ScreenOS Cookbook by Stefan Brunner, Vik Davar, David Delcourt, Ken Draper, Joe Kelly & Sunil Wadhwa, published by O’Reilly.




Managing NFS and NIS…..

January 29, 2015







Recently, I borrowed a book entitled Managing NFS and NIS, written by Hal Stern and published by O’Reilly & Associates. The book is quite impressive for a system administrator or system engineer who deals with NFS and NIS on a Linux or UNIX operating system. NIS provides a distributed database system for common configuration files. NIS servers manage copies of the database files, and NIS clients request information from the servers instead of using their own local copies of these files. NFS is a distributed filesystem. An NFS server has one or more filesystems that are mounted by NFS clients; to the NFS clients, the remote disks look like local disks.

NFS achieves the first level of transparency by defining a generic set of filesystem operations that are performed on a Virtual File System (VFS). The second level comes from the definition of virtual nodes, which are related to the more familiar Unix filesystem inode structures but hide the actual structure of the physical filesystem beneath them. The set of all procedures that can be performed on files is the vnode interface definition. The vnode and VFS specifications together define the NFS protocol. The Virtual File System allows a client system to access many different types of filesystems as if they were all attached locally. VFS hides the differences in implementations under a consistent interface. On a Unix NFS client, the VFS interface makes all NFS filesystems look like Unix filesystems, even if they are exported from IBM MVS or Windows NT servers. The VFS interface is really nothing more than a switchboard for filesystem- and file-oriented operations.

NFS is an RPC-based protocol, with a client-server relationship between the machine having the filesystem to be distributed and the machine wanting access to that filesystem. NFS kernel server threads run on the server and accept RPC calls from clients. These server threads are initiated by an nfsd daemon. NFS servers also run the mountd daemon to handle filesystem mount requests and some pathname translation. On an NFS client, asynchronous I/O threads (async threads) are usually run to improve NFS performance, but they are not required.

Each version of the NFS RPC protocol contains several procedures, each of which operates on either a file or a filesystem object. The basic procedures performed on an NFS server can be grouped into directory operations, file operations, link operations, and filesystem operations. Directory operations include mkdir and rmdir, which create and destroy directories like their Unix system call equivalents. readdir reads a directory, using an opaque directory pointer to perform sequential reads of the same directory. Other directory-oriented procedures are rename and remove, which operate on entries in a directory the same way the mv and rm commands do. create makes a new directory entry for a file.

The NFS protocol is stateless, meaning that there is no need to maintain information about the protocol on the server. The client keeps track of all information required to send requests to the server, but the server has no information about previous NFS requests, or how various NFS requests relate to each other. Remember the differences between the TCP and UDP protocols: UDP is a stateless protocol that can lose packets or deliver them out of order; TCP is a stateful protocol that guarantees that packets arrive and are delivered in order. The hosts using TCP must remember connection state information to recognize when part of a transmission was lost.
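
The statelessness point is easy to see at the socket level. Here is a minimal Python sketch (the request strings are made-up stand-ins for NFS calls, not real protocol bytes): each UDP datagram stands alone, and the server keeps no connection state between requests.

```python
import socket

# "Server": a UDP socket bound to a local port; it keeps no per-client state.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # port 0 lets the OS pick a free port
addr = server.getsockname()

# "Client": sends self-contained requests; no connect() or handshake needed.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"GETATTR /export/home", addr)
client.sendto(b"READDIR /export", addr)

# Each datagram arrives independently; the server services it and forgets it.
req1, _ = server.recvfrom(1024)
req2, _ = server.recvfrom(1024)
print(req1, req2)

client.close()
server.close()
```

A TCP version of the same exchange would first have to establish a connection, and both kernels would then track sequence numbers and acknowledgements for it — exactly the state that NFS-over-UDP avoids keeping on the server.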

NFS RPC requests are sent from a client to the server one at a time. A single client process will not issue another RPC call until the call in progress completes and has been acknowledged by the NFS server. In this respect NFS RPC calls are like system calls — a process cannot continue with the next system call until the current one completes. A single client host may have several RPC calls in progress at any time, coming from several processes, but each process ensures that its file operations are well ordered by waiting for their acknowledgements. Using the NFS async threads makes this a little more complicated, but for now it’s helpful to think of each process sending a stream of NFS requests, one at a time.

Lastly, managing NFS and NIS is quite a complicated task. The system administrator or system engineer has to be very careful in designing the network file system. PC/NFS is a client-only implementation for the DOS operating system. There are also mail services that we can centralize using NFS and NIS. Overall, Managing NFS and NIS is a good book to read…

p/s:- Some of this article is excerpted from Managing NFS and NIS, written by Hal Stern and published by O’Reilly & Associates.





Computer Forensic….

January 8, 2015








Computer forensics is a relatively new field in the IT industry. Nowadays, computer forensics is taught as a subject and course in universities and colleges. In Malaysia, computer forensics is a new field that has only recently been introduced. Computer forensics is basically an investigation carried out to find evidence about criminal activities that can be presented in a court of law. I just borrowed the book entitled Computer Forensics For Dummies from the National Library (PNM).

Workplaces have disaster-recovery and business-continuity systems that perform automatic backups. Companies are required to retain business records for audit or litigation purposes. Even if you never saved a particular
file to the networked server, it might still be retained on multiple backup media somewhere. Instant, text, and voice messages exist in digital format and, therefore, are stored on the servers of your Internet service provider
(ISP), cell provider, or phone company. Although text messages are more transient than e-mail, messages are stored and backed up the same way. Recipients have copies that may also be stored and backed up.

Your job as a computer forensics investigator involves a series of processes to find, analyze, and preserve the relevant digital files or data for use as e-evidence. You perform those functions as part of a case. Each computer forensic case has a life cycle that starts with getting permission to invade someone else’s private property. You might enter into the case at a later stage in the life cycle. Taken to completion, the case ends in court where a correct verdict is made, unless something causes the case to terminate earlier.

The first step in any computer forensic investigation is to identify the type of media you’re working with. The various types of media you might encounter are described in this list:
1. Fixed storage device: Any device that you use to store data and that’s permanently attached to a computer is a fixed storage device. The type of storage device you’re probably most familiar with is the classic magnetic-media hard drive, which is inside almost every personal computer. Traditional hard drives are mechanisms that rotate disks coated with a magnetic material; however, new technology uses chip-based storage media known as the solid-state drive (SSD). It’s as though your thumb flash drive is 1,000 times larger than its current size!

2. Portable storage device: Most people consider floppy disks (remember those?) or flash memory drives to be the only true portable storage devices, but any device that you can carry with you qualifies. iPods, MP3 players, mobile phones, and even some wristwatches are also portable storage devices. Unlike fixed storage, where most interfaces are standardized, mobile devices have different interfaces, which adds to the complexity of your case.

3. Memory storage area: With the move from desktop computers to mobile devices, investigators are seeing increasingly more evidence that’s found only in memory. The obvious type of device is a mobile phone (such as the Apple iPhone) or personal digital assistant that often saves data only in volatile memory. After the battery dies, your data evidence also dies. Not-so-obvious places to find evidence in volatile memory are the RAM areas of regular computers and servers as well as some network devices.

4. Network storage device: With the growth of the Internet and the exponential increase in the power of network devices, data can be found on devices that until now haven’t held forensic data of any value. Devices such as routers, switches, and even wireless access points can now save possible forensic information and even archive it for future access.

5. Memory card: In addition to using built-in RAM memory, many devices now use digital memory cards to add storage. Common types are SD and MMC flash cards. To read this type of memory device, you often have to use a multimedia card reader.

In conclusion, computer forensics is a good and interesting field to venture into here in Malaysia. There are some companies that provide services in the computer forensics field. Some use operating systems such as BackTrack 5 R2 or the Hex Live CD to do forensic jobs. EnCase and FTK can also help us to do computer forensics investigations. I also provide computer forensics services to my customers – PC Network Services. The future of computer forensics in Malaysia is quite challenging, and it also provides better jobs in forensic investigation.

p/s:- Some of this article is excerpted from Computer Forensics For Dummies, published by Wiley Publishing Inc. Authors: Linda Volonino and Reynaldo Anzaldua.





UNIX Network Programming….

December 16, 2014








I just borrowed a book from the National Library (PNM) entitled UNIX Network Programming by Stevens, Fenner and Rudoff. Previously, I posted a blog entry about the book that covered the TCP client/server chapters. Now, I’m going to touch on the chapters about I/O multiplexing and socket options.

Nonblocking I/O Model
When we set a socket to be nonblocking, we are telling the kernel “when an I/O operation that I request cannot be completed without putting the process to sleep, do not put the process to
sleep, but return an error instead.” We will describe nonblocking I/O in Chapter 16.

The first three times that we call recvfrom, there is no data to return, so the kernel immediately returns an error of EWOULDBLOCK instead. The fourth time we call recvfrom, a datagram is ready, it is copied into our application buffer, and recvfrom returns successfully. We then process the data. When an application sits in a loop calling recvfrom on a nonblocking descriptor like this, it is called polling. The application is continually polling the kernel to see if some operation is ready. This is often a waste of CPU time, but this model is occasionally encountered, normally on systems dedicated to one function.
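
The polling loop described above can be sketched in a few lines of Python (a local datagram socket pair stands in for the UDP socket in the book’s figures): recv on a nonblocking descriptor raises EWOULDBLOCK — surfaced in Python as BlockingIOError — until a datagram is ready.

```python
import socket

# A connected pair of local datagram sockets stands in for client/server.
sender, receiver = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
receiver.setblocking(False)    # recv now returns an error instead of sleeping

attempts = 0
while True:
    try:
        data = receiver.recv(1024)       # like recvfrom on the UDP socket
        break                            # datagram copied into our buffer
    except BlockingIOError:              # kernel returned EWOULDBLOCK
        attempts += 1
        if attempts == 3:                # after a few empty polls, data "arrives"
            sender.send(b"datagram")

print(data, attempts)
```

The three failed recv calls before the datagram arrives are exactly the polling the book warns about: each one is a full system call that burns CPU time just to learn that nothing is ready yet.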

I/O Multiplexing Model
With I/O multiplexing, we call select or poll and block in one of these two system calls, instead of blocking in the actual I/O system call.

We block in a call to select, waiting for the datagram socket to be readable. When select returns that the socket is readable, we then call recvfrom to copy the datagram into our application buffer.

Comparing Figure 6.3 to Figure 6.1, there does not appear to be any advantage, and in fact, there is a slight disadvantage because using select requires two system calls instead of one. But the advantage in using select, which we will see later in this chapter, is that we can wait for more than one descriptor to be ready. Another closely related I/O model is to use multithreading with blocking I/O. That model very closely resembles the model described above, except that instead of using select to block on multiple file descriptors, the program uses multiple threads (one per file descriptor), and each thread is then free to call blocking system calls like recvfrom.
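
A minimal Python sketch of the same idea: we block in select rather than in the read call itself, and only call recv once select reports the socket readable.

```python
import select
import socket

sender, receiver = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

# Nothing to read yet: select with a zero timeout returns an empty ready list.
readable, _, _ = select.select([receiver], [], [], 0)
before = list(readable)

sender.send(b"datagram")

# Now block in select (not in recv) until the socket is readable...
readable, _, _ = select.select([receiver], [], [])
# ...and only then call recv, knowing it will not block.
data = receiver.recv(1024) if receiver in readable else None
print(before, data)
```

The payoff the chapter promises shows up when the first list passed to select holds many descriptors: one blocking call then watches all of them at once, which a plain blocking recv cannot do.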

Signal-Driven I/O Model
We can also use signals, telling the kernel to notify us with the SIGIO signal when the descriptor is ready. We call this signal-driven I/O.

We first enable the socket for signal-driven I/O (as we will describe in Section 25.2) and install a signal handler using the sigaction system call. The return from this system call is immediate and our process continues; it is not blocked. When the datagram is ready to be read, the SIGIO signal is generated for our process. We can either read the datagram from the
signal handler by calling recvfrom and then notify the main loop that the data is ready to be processed (this is what we will do in Section 25.3), or we can notify the main loop and let it read the datagram.

Regardless of how we handle the signal, the advantage to this model is that we are not blocked while waiting for the datagram to arrive. The main loop can continue executing and just wait to be notified by the signal handler that either the data is ready to process or the datagram is ready to be read.
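
On Linux, the same arrangement can be sketched in Python with fcntl: F_SETOWN plus the O_ASYNC flag is the Linux analogue of the sigaction setup described above, and a local socket pair again stands in for the UDP socket.

```python
import fcntl
import os
import signal
import socket
import time

ready = []
signal.signal(signal.SIGIO, lambda signum, frame: ready.append(signum))

sender, receiver = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
fd = receiver.fileno()
fcntl.fcntl(fd, fcntl.F_SETOWN, os.getpid())          # deliver SIGIO to us
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_ASYNC)    # enable signal-driven I/O

sender.send(b"datagram")          # arrival of data generates SIGIO

# The "main loop": we are never blocked in recv while waiting.
deadline = time.time() + 2.0
while not ready and time.time() < deadline:
    time.sleep(0.01)

data = receiver.recv(1024) if ready else b""
print(data)
```

Note that the handler here only notifies the main loop, which then reads the datagram itself — the second of the two strategies the text mentions.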

Asynchronous I/O Model
Asynchronous I/O is defined by the POSIX specification, and various differences in the realtime functions that appeared in the various standards which came together to form the current POSIX specification have been reconciled. In general, these functions work by telling the kernel to start the operation and to notify us when the entire operation (including the copy of the data from the kernel to our buffer) is complete. The main difference between this model and the
signal-driven I/O model in the previous section is that with signal-driven I/O, the kernel tells us when an I/O operation can be initiated, but with asynchronous I/O, the kernel tells us when an I/O operation is complete.

We call aio_read (the POSIX asynchronous I/O functions begin with aio_ or lio_) and pass the kernel the descriptor, buffer pointer, buffer size (the same three arguments for read), file offset (similar to lseek), and how to notify us when the entire operation is complete. This system call returns immediately and our process is not blocked while waiting for the I/O to complete. We assume in this example that we ask the kernel to generate some signal when the operation is complete. This signal is not generated until the data has been copied into our application buffer, which is different from the signal-driven I/O model. As of this writing, few systems support POSIX asynchronous I/O. We are not certain, for
example, if systems will support it for sockets. Our use of it here is as an example to compare against the signal-driven I/O model.

IPv4 Socket Options
These socket options are processed by IPv4 and have a level of IPPROTO_IP. We defer discussion of the multicasting socket options until Section 21.6.
IP_HDRINCL Socket Option
If this option is set for a raw IP socket (Chapter 28), we must build our own IP header for all the datagrams we send on the raw socket. Normally, the kernel builds the IP header for datagrams sent on a raw socket, but there are some applications (notably traceroute) that build their own IP header to override values that IP would place into certain header fields. When this option is set, we build a complete IP header, with the following exceptions:
1. IP always calculates and stores the IP header checksum.
2. If we set the IP identification field to 0, the kernel will set the field.
3. If the source IP address is INADDR_ANY, IP sets it to the primary IP address of the outgoing interface.

Setting IP options is implementation-dependent. Some implementations take any IP options that were set using the IP_OPTIONS socket option and append these to the header that we build, while others require our header to also contain any desired IP options. Some fields must be in host byte order, and some in network byte order. This is
implementation-dependent, which makes writing raw packets with IP_HDRINCL not as portable as we’d like.
We show an example of this option in Section 29.7.
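
Even though the kernel always fills in the checksum field when IP_HDRINCL is set, it is worth seeing what “IP always calculates and stores the IP header checksum” amounts to. Here is a Python sketch using the standard ones’-complement algorithm over an illustrative 20-byte header (the addresses and ID are just example values, not from the book):

```python
import struct

def ip_checksum(header: bytes) -> int:
    """Ones'-complement sum of 16-bit words, as used for the IPv4 header."""
    if len(header) % 2:
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# Minimal 20-byte IPv4 header (no options): version/IHL, TOS, total length,
# identification, flags/fragment offset, TTL, protocol (6 = TCP),
# checksum (left 0 while computing), source and destination addresses.
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 0x003C, 0x1C46, 0x4000, 64, 6, 0,
    bytes([172, 16, 10, 99]), bytes([172, 16, 10, 12]),
)
csum = ip_checksum(header)
print(hex(csum))   # 0xb1e6 for this particular header
```

With IP_HDRINCL you would pack a header like this yourself and let the kernel overwrite the checksum field; without it, the kernel builds the whole header for you.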

Well, I have only touched on the IPv4 socket options. For the rest of the chapters and info, you all will have to borrow the book from the National Library or buy it in a bookstore.

p/s:- For your info, the PIKOM PC Fair is just around the corner, starting on 19th December till 21st December 2014. I’m looking forward to attending the fair, which will be held at the KL Convention Center. Some of this article is an excerpt from the book UNIX Network Programming, written by Stevens, Fenner and Rudoff and published by Addison-Wesley.



VMware Infrastructure 3……

November 26, 2014


Virtual machines are just like physical machines. You can log on to them, and they have BIOS, hard disks, memory, CPUs, operating systems and applications. In fact, if you connect remotely to a machine, you’ll never know that it’s virtual unless someone tells you. Virtual machines work and behave just like physical machines. The machines themselves don’t even know they are virtual.

I got a book from the National Library (PNM) about VMware Infrastructure 3 For Dummies. To fully understand your storage options and make informed decisions, you need to understand SCSI. Like the OSI networking model, SCSI makes use of several layers offering different functionality. The official name for this model is the SCSI Architecture Model (SAM). You’ll need to decide whether or not you want to boot ESX from a SAN before you install the server. You can boot from SAN with both Fibre Channel and iSCSI. The only iSCSI catch is that you need to use a hardware initiator instead of a software initiator.

Switched Fibre Channel SANs were the first SAN technology fully supported by VMware. iSCSI is a newer technology. However, like Fibre Channel, iSCSI puts SCSI commands and data blocks into network frames. The difference is that iSCSI uses TCP/IP instead of the Fibre Channel Protocol (FCP). To frame data, iSCSI needs a protocol initiator. This can be software based (Microsoft’s iSCSI Software Initiator) or hardware based (a TOE card). iSCSI nodes are iSCSI devices. They can be initiators, targets, or both.

A NAS device is typically a plug-and-play storage device that supports the Network File System (NFS), an open standard, or Server Message Block (SMB), which is Windows networking. VMware uses the NFS protocol because it’s more of an open standard than SMB. Most of VMware’s datacenter features, such as VMotion and DRS, work with NAS. However, VCB does not.

The VMkernel loads in high memory and controls all your hardware. It is an abstraction layer (hiding the implementation details of hardware sharing) that virtualizes hardware. The VMkernel assumes that all the hardware in your system is functioning properly. Any faulty hardware can cause it to crash, yielding the Purple Screen of Death (PSOD). Additionally, the VMkernel controls all scheduling for the ESX machine. This includes the virtual machines and the Service Console.

You can install ESX on an Intel Xeon or later processor, or on an AMD Opteron processor in 32-bit mode. You also need at least 2GB of RAM and 560MB of disk space. Of course, the more CPUs, RAM, and disk storage you have, the more virtual machines you can support. VMware High Availability (HA) is supported only experimentally in ESXi. If it does not work, you will need to manually start virtual machines on another ESX or ESXi host if the one they were running on fails. The VMware Infrastructure Client (VIC) is your one-stop shop for all your VMware Infrastructure 3 needs. The VIC can log in to and manage ESX hosts directly, or act as a proxy through VirtualCenter.

Your virtual machines connect to virtual switches. Virtual switches, in turn, connect to NICs in your ESX host. And the NICs connect to your physical network. Virtual switches perform three different functions for an ESX host. Each function is considered a different connection type or port:

1. Virtual machines.

2. VMkernel.

3. Service Console.

Load balancing offers three different ways to pick which uplink adapter to use for outgoing traffic: a virtual port-based algorithm, a MAC address-based algorithm, or an IP address-based algorithm; you can also set an explicit failover order. Each has its tradeoffs.

p/s:- This is some excerpt taken from the book VMware Infrastructure 3 For Dummies by William J. Lowe, published by Wiley Publishing Inc.

– Just to make a note… the PIKOM PC Fair will be held at the KL Convention Center on 19th–21st December 2014… I will be attending the PC Fair next month….


Fuzzing…Brute Force Vulnerability Discovery……

October 24, 2014


This week I’m writing about Fuzzing… Brute Force Vulnerability Discovery… I just got the book from the National Library (PNM). Fuzzing is a method for discovering faults in software by providing unexpected input and monitoring for exceptions. It is typically an automated or semiautomated process that involves repeatedly manipulating and supplying data to target software for processing. Fuzzing has evolved into one of today’s most effective approaches to testing software security. To “fuzz,” you attach a program’s inputs to a source of random data, and then systematically identify the failures that arise. Hackers have relied on fuzzing for years. Renowned fuzzing experts show you how to use fuzzing to reveal weaknesses in your software before someone else does.

Pregenerated Test Cases – this is the method taken by the PROTOS framework. Test case development begins with studying a particular specification to understand all supported data structures and the acceptable value ranges for each. Hard-coded packets or files are then generated that test boundary conditions or violate the specification altogether. These test cases can then be used to test how accurately the specification has been implemented on target systems. Creating test cases can require considerable work up front, but has the advantage of being reusable to uniformly test multiple implementations of the same protocol or file format.

Manual Protocol Mutation Testing – there is no automated fuzzer involved; the researcher is the fuzzer. After loading up the target application, the researcher simply enters inappropriate data in an attempt to crash the server or induce some undesirable behaviour. This class of fuzzing is most often applied to Web applications.

Mutation or Brute Force Testing – a fuzzer that starts with a valid sample of a protocol or data format and continually mangles every individual byte, word, dword, or string within that data packet or file. This is a great early approach because it requires very little up-front research, and implementing a basic brute force fuzzer is relatively straightforward.
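
A brute force byte-mangler of this kind is only a few lines of Python. This is a hedged sketch, not code from the book: the sample “file format” header is invented, and the boundary values are typical choices rather than an exhaustive set.

```python
INTERESTING = [0x00, 0xFF, 0x7F, 0x80]   # common boundary-condition bytes

def brute_force_cases(sample: bytes):
    """Yield mutants: every byte position crossed with each boundary value."""
    for pos in range(len(sample)):
        for val in INTERESTING:
            if sample[pos] != val:        # skip mutations that change nothing
                mutant = bytearray(sample)
                mutant[pos] = val
                yield bytes(mutant)

# A made-up valid sample of some hypothetical file format.
sample = b"FMT1\x00\x10payload"
cases = list(brute_force_cases(sample))
print(len(cases))
```

Each test case would then be fed to the target parser while a monitor watches for crashes; the loop structure is the whole trick, which is why this approach needs so little up-front research compared with pregenerated test cases.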

Network Protocol Fuzzing – I would like to touch on chapter 14, regarding network protocol fuzzing, which requires identifying the attack surface, mutating or generating error-inducing fuzz values, transmitting those fuzz values to a target, and monitoring that target for faults. If your fuzzer communicates with its target over some form of socket, then it is a network protocol fuzzer.

The book contains chapters about Fuzzer Methods and Fuzzer Types, Data Representation and Analysis, Requirements for Effective Fuzzing, Automation and Data Generation, Environment Variable and Argument Fuzzing, and so on, from Chapter 8 to 26. I strongly recommend that people in the software engineering and malware analysis fields read this book.

p/s:- Excerpt taken from the book Fuzzing: Brute Force Vulnerability Discovery by Michael Sutton, Adam Greene, and Pedram Amini, published by Addison-Wesley.

