
Managing NFS and NIS…..

January 29, 2015


Recently, I borrowed a book entitled Managing NFS and NIS from O'Reilly & Associates, written by Hal Stern. The book is quite impressive for a system administrator or system engineer who deals with NFS and NIS on a Linux or UNIX operating system. NIS provides a distributed database system for common configuration files. NIS servers manage copies of the database files, and NIS clients request information from the servers instead of using their own local copies of these files. NFS is a distributed filesystem. An NFS server has one or more filesystems that are mounted by NFS clients; to the NFS clients, the remote disks look like local disks.

NFS achieves the first level of transparency by defining a generic set of filesystem operations that are performed on a Virtual File System (VFS). The second level comes from the definition of virtual nodes, which are related to the more familiar Unix filesystem inode structures but hide the actual structure of the physical filesystem beneath them. The set of all procedures that can be performed on files is the vnode interface definition. The vnode and VFS specifications together define the NFS protocol. The Virtual File System allows a client system to access many different types of filesystems
as if they were all attached locally. VFS hides the differences in implementations under a consistent interface. On a Unix NFS client, the VFS interface makes all NFS filesystems look like Unix filesystems, even if they are exported from IBM MVS or Windows NT servers. The VFS interface is really nothing more than a switchboard for filesystem- and file-oriented operations.

NFS is an RPC-based protocol, with a client-server relationship between the machine having the filesystem to be distributed and the machine wanting access to that filesystem. NFS kernel server threads run on the server and accept RPC calls from clients. These server threads are initiated by an nfsd daemon. NFS servers also run the mountd daemon to handle filesystem mount requests and some pathname translation. On an NFS client, asynchronous I/O threads (async threads) are usually run to improve NFS performance, but they are not required.
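To picture the vnode "switchboard" idea described above, here is a minimal C sketch of a vnode-style operations table. The structure and member names are simplified illustrations of the concept, not the actual kernel definitions:

```c
/* Hypothetical, simplified sketch of a vnode operations table.
 * Real kernels define far richer interfaces; names here are illustrative only. */
#include <stddef.h>
#include <sys/types.h>

struct vnode;                          /* opaque per-file object */

struct vnodeops {
    int     (*vop_open)(struct vnode *vp, int flags);
    ssize_t (*vop_read)(struct vnode *vp, void *buf, size_t len, off_t off);
    ssize_t (*vop_write)(struct vnode *vp, const void *buf, size_t len, off_t off);
    int     (*vop_close)(struct vnode *vp);
};

struct vnode {
    const struct vnodeops *v_ops;      /* filled in by the local fs, NFS, etc. */
    void                  *v_data;     /* filesystem-private data              */
};

/* The VFS layer calls through the table without knowing whether the
 * underlying filesystem is local or remote (NFS). */
static ssize_t vfs_read(struct vnode *vp, void *buf, size_t len, off_t off)
{
    return vp->v_ops->vop_read(vp, buf, len, off);
}
```

Each filesystem type supplies its own table, which is why the client can treat an NFS mount exactly like a local disk.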

Each version of the NFS RPC protocol contains several procedures, each of which operates on either a file or a filesystem object. The basic procedures performed on an NFS server can be grouped into directory operations, file operations, link operations, and filesystem operations. Directory operations include mkdir and rmdir, which create and destroy directories like their Unix system call equivalents. readdir reads a directory, using an opaque directory pointer to perform sequential reads of the same directory. Other directory-oriented procedures are rename and remove, which operate on entries in a directory the same way the mv and rm commands do. create makes a new directory entry for a file.

The NFS protocol is stateless, meaning that there is no need to maintain information about the protocol on the server. The client keeps track of all information required to send requests to the server, but the server has no information about previous NFS requests, or how various NFS requests relate to each other. Remember the differences between the TCP and UDP protocols: UDP is a stateless protocol that can lose packets or deliver them out of order; TCP is a stateful protocol that guarantees that packets arrive and are delivered in order. The hosts using TCP must remember connection state information to recognize when part of a transmission was lost.
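Statelessness is easiest to see in what a read request carries: the client supplies the file handle, offset, and count on every call, so the server needs no memory of earlier requests. A rough C sketch, with simplified field names rather than the real protocol definitions:

```c
/* Illustrative sketch of why NFS reads are stateless: every request carries
 * the file handle, offset, and count, so the server keeps no per-client state.
 * Field names are simplified; see the actual NFS protocol specs for details. */
#include <stdint.h>

#define FHSIZE 32                     /* NFSv2-style opaque file handle size */

struct nfs_fhandle {
    uint8_t data[FHSIZE];             /* opaque to the client */
};

struct nfs_read_args {
    struct nfs_fhandle file;          /* which file (no open-file table on the server) */
    uint32_t           offset;        /* where to read from                            */
    uint32_t           count;         /* how many bytes                                */
};
```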

NFS RPC requests are sent from a client to the server one at a time. A single client process will not issue another RPC call until the call in progress completes and has been acknowledged by the NFS server. In this respect NFS RPC calls are like system calls — a process cannot continue with the next system call until the current one completes. A single client host may have several RPC calls in progress at any time, coming from several processes, but each process ensures that its file operations are well ordered by waiting for their acknowledgements. Using the NFS async threads makes this a little more complicated, but for now it’s helpful to think of each process sending a stream of NFS requests, one at a time.

Lastly, managing NFS and NIS filesystems is quite a complicated task. The system administrator or system engineer has to be very careful in designing the network filesystem. PC/NFS is a client-only implementation for machines running the DOS operating system. There are also mail services that we can centralize using NFS and NIS. Overall, Managing NFS and NIS is a good book to read…

p/s:- Some of this article is taken from the excerpt of Managing NFS and NIS – O'Reilly & Associates, written by Hal Stern.

Computer Forensic….

January 8, 2015


Computer forensics is a relatively new field in the IT industry. Nowadays, computer forensics is taught as a subject and course in universities and colleges. In Malaysia, it is a new field that has only recently been introduced. Computer forensics is basically an investigation carried out to find evidence of criminal activity that can be presented in a court of law. I just borrowed the book Computer Forensics For Dummies from the National Library – PNM.

Workplaces have disaster-recovery and business-continuity systems that perform automatic backups. Companies are required to retain business records for audit or litigation purposes. Even if you never saved a particular
file to the networked server, it might still be retained on multiple backup media somewhere. Instant, text, and voice messages exist in digital format and, therefore, are stored on the servers of your Internet service provider
(ISP), cell provider, or phone company. Although text messages are more transient than e-mail, messages are stored and backed up the same way. Recipients have copies that may also be stored and backed up.

Your job as a computer forensics investigator involves a series of processes to find, analyze, and preserve the relevant digital files or data for use as e-evidence. You perform those functions as part of a case. Each computer forensic case has a life cycle that starts with getting permission to invade someone else’s private property. You might enter into the case at a later stage in the life cycle. Taken to completion, the case ends in court where a correct verdict is made, unless something causes the case to terminate earlier.

The first step in any computer forensic investigation is to identify the type of media you’re working with. The various types of media you might encounter are described in this list:
1. Fixed storage device: Any device that you use to store data and that’s permanently attached to a computer is a fixed storage device. The type of storage device you’re probably most familiar with is the classic magnetic-media hard drive, which is inside almost every personal computer . Traditional hard drives are mechanisms that rotate disks coated with a magnetic material; however, new technology uses chip-based storage media known as the solid-state drive (SSD). It’s as though your thumb flash drive is 1,000 times larger than its current size!

2. Portable storage device: Most people consider floppy disks (remember those?) or flash memory drives to be the only true portable storage devices, but any device that you can carry with you qualifies. iPods, MP3 players, mobile phones, and even some wristwatches are also portable storage devices. Unlike fixed storage, where most interfaces are standardized, mobile devices have different interfaces, which adds to the complexity of your case.

3.  Memory storage area: With the move from desktop computers to mobile devices, investigators are seeing increasingly more evidence that’s found only in memory. The obvious type of device is a mobile phone (such as the Apple iPhone) or personal digital assistant that often saves data only in volatile memory. After the battery dies, your data evidence also dies. Not-so-obvious places to find evidence in volatile memory are the RAM areas of regular computers and servers as well as some network devices.

4. Network storage device: With the growth of the Internet and the exponential increase in the power of network devices, data can be found on devices that until now haven’t held forensic data of any value. Devices such as routers , switches, and even wireless access points can now save possible forensic information  and even archive it for future access.

5. Memory card: In addition to using built-in RAM memory, many devices now use digital memory cards to add storage. Common types are SD and MMC flash cards. To read this type of memory device, you often have to use a multimedia card reader.

In conclusion, computer forensics is a good and interesting field to venture into here in Malaysia. There are some companies that provide services in the computer forensics field. Some use operating systems such as BackTrack 5 R2 or the Hex Live CD to do forensic jobs. EnCase and FTK can also help us carry out computer forensic investigations. I also provide computer forensic services to my customers – PC Network Services. The future of computer forensics in Malaysia is really quite challenging, and it also provides better jobs in forensic investigation.

p/s:- Some of this article is taken from the excerpt of Computer Forensics For Dummies – Wiley Publishing Inc. Authors: Linda Volonino and Reynaldo Anzaldua.

UNIX Network Programming….

December 16, 2014


Just borrowed a book from the National Library (PNM) entitled UNIX Network Programming by Stevens, Fenner and Rudoff. Previously I posted a blog entry about the book covering the TCP Client/Server chapters. Now I'm going to touch on the chapters about I/O Multiplexing and Socket Options.

Nonblocking I/O Model
When we set a socket to be nonblocking, we are telling the kernel “when an I/O operation that I request cannot be completed without putting the process to sleep, do not put the process to
sleep, but return an error instead.” We will describe nonblocking I/O in Chapter 16.

The first three times that we call recvfrom, there is no data to return, so the kernel immediately returns an error of EWOULDBLOCK instead. The fourth time we call recvfrom, a datagram is ready, it is copied into our application buffer, and recvfrom returns successfully. We then process the data. When an application sits in a loop calling recvfrom on a nonblocking descriptor like this, it is called polling. The application is continually polling the kernel to see if some operation is ready. This is often a waste of CPU time, but this model is occasionally encountered, normally on systems dedicated to one function.
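As a rough sketch of that polling loop (not the book's figure, just an illustration; it assumes the UDP socket is already created and bound):

```c
/* Polling a nonblocking UDP socket: recvfrom() fails with EWOULDBLOCK/EAGAIN
 * until a datagram arrives.  Sketch only; sockfd is assumed to be bound. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void poll_datagram(int sockfd)
{
    char buf[1500];

    /* mark the descriptor nonblocking */
    int flags = fcntl(sockfd, F_GETFL, 0);
    fcntl(sockfd, F_SETFL, flags | O_NONBLOCK);

    for (;;) {
        ssize_t n = recvfrom(sockfd, buf, sizeof(buf), 0, NULL, NULL);
        if (n >= 0) {
            printf("got %zd bytes\n", n);        /* process the data */
            break;
        }
        if (errno != EWOULDBLOCK && errno != EAGAIN) {
            perror("recvfrom");
            break;
        }
        /* nothing ready yet: loop and ask again (this is the polling) */
    }
}
```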

I/O Multiplexing Model
With I/O multiplexing, we call select or poll and block in one of these two system calls, instead of blocking in the actual I/O system call.

We block in a call to select, waiting for the datagram socket to be readable. When select returns that the socket is readable, we then call recvfrom to copy the datagram into our application buffer.

Comparing Figure 6.3 to Figure 6.1, there does not appear to be any advantage, and in fact, there is a slight disadvantage because using select requires two system calls instead of one. But the advantage in using select, which we will see later in this chapter, is that we can wait for more than one descriptor to be ready. Another closely related I/O model is to use multithreading with blocking I/O. That model very closely resembles the model described above, except that instead of using select to block on multiple file descriptors, the program uses multiple threads (one per file descriptor), and each thread is then free to call blocking system calls like recvfrom.
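A minimal sketch of blocking in select and then calling recvfrom for a single descriptor might look like this (illustration only, error handling trimmed):

```c
/* Block in select() until the datagram socket is readable, then recvfrom()
 * copies the datagram into our buffer.  Sketch only; sockfd assumed valid. */
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

void wait_then_read(int sockfd)
{
    fd_set rset;
    char buf[1500];

    FD_ZERO(&rset);
    FD_SET(sockfd, &rset);

    /* we block here, not in recvfrom; select could watch many descriptors */
    if (select(sockfd + 1, &rset, NULL, NULL, NULL) > 0 &&
        FD_ISSET(sockfd, &rset)) {
        ssize_t n = recvfrom(sockfd, buf, sizeof(buf), 0, NULL, NULL);
        printf("got %zd bytes\n", n);
    }
}
```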

Signal-Driven I/O Model
We can also use signals, telling the kernel to notify us with the SIGIO signal when the descriptor is ready. We call this signal-driven I/O .

We first enable the socket for signal-driven I/O (as we will describe in Section 25.2) and install a signal handler using the sigaction system call. The return from this system call is immediate and our process continues; it is not blocked. When the datagram is ready to be read, the SIGIO signal is generated for our process. We can either read the datagram from the
signal handler by calling recvfrom and then notify the main loop that the data is ready to be processed (this is what we will do in Section 25.3), or we can notify the main loop and let it read the datagram.

Regardless of how we handle the signal, the advantage to this model is that we are not blocked while waiting for the datagram to arrive. The main loop can continue executing and just wait to be notified by the signal handler that either the data is ready to process or the datagram is ready to be read.
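A rough sketch of arming a UDP socket for SIGIO delivery is shown below; the exact fcntl/ioctl details vary somewhat between platforms, and the book covers the full recipe in Section 25.2:

```c
/* Signal-driven I/O sketch: install a SIGIO handler, tell the kernel which
 * process owns the socket, and enable asynchronous notification.  The main
 * loop keeps running; the handler fires when a datagram becomes readable. */
#include <fcntl.h>
#include <signal.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static volatile sig_atomic_t data_ready = 0;

static void sigio_handler(int signo)
{
    (void)signo;
    data_ready = 1;               /* let the main loop do the recvfrom */
}

void enable_sigio(int sockfd)
{
    struct sigaction sa;

    sa.sa_handler = sigio_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGIO, &sa, NULL);

    fcntl(sockfd, F_SETOWN, getpid());                            /* who gets SIGIO   */
    fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL, 0) | O_ASYNC);  /* enable delivery  */
}
```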

Asynchronous I/O Model
Asynchronous I/O is defined by the POSIX specification, and various differences in the realtime functions that appeared in the various standards which came together to form the current POSIX specification have been reconciled. In general, these functions work by telling the kernel to start the operation and to notify us when the entire operation (including the copy of the data from the kernel to our buffer) is complete. The main difference between this model and the
signal-driven I/O model in the previous section is that with signal-driven I/O, the kernel tells us when an I/O operation can be initiated, but with asynchronous I/O, the kernel tells us when an I/O operation is complete.

We call aio_read (the POSIX asynchronous I/O functions begin with aio_ or lio_) and pass the kernel the descriptor, buffer pointer, buffer size (the same three arguments for read), file offset (similar to lseek), and how to notify us when the entire operation is complete. This system call returns immediately and our process is not blocked while waiting for the I/O to complete. We assume in this example that we ask the kernel to generate some signal when the operation is complete. This signal is not generated until the data has been copied into our application buffer, which is different from the signal-driven I/O model. As of this writing, few systems support POSIX asynchronous I/O. We are not certain, for
example, if systems will support it for sockets. Our use of it here is as an example to compare against the signal-driven I/O model.
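As a hedged illustration (support varies by system, as noted above), issuing a POSIX aio_read with signal notification might look roughly like this; on Linux it typically needs linking with -lrt:

```c
/* POSIX asynchronous I/O sketch: aio_read() returns immediately; the kernel
 * signals us (here with SIGUSR1) only after the data is already in our buffer. */
#include <aio.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static char buf[4096];

static void aio_done(int signo)
{
    (void)signo;                  /* data is already in buf at this point */
}

void start_async_read(int fd)
{
    static struct aiocb cb;       /* must stay valid until the operation completes */

    signal(SIGUSR1, aio_done);

    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;
    cb.aio_sigevent.sigev_notify = SIGEV_SIGNAL;
    cb.aio_sigevent.sigev_signo  = SIGUSR1;

    if (aio_read(&cb) == 0)
        printf("aio_read issued; the process keeps running\n");
}
```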

IPv4 Socket Options
These socket options are processed by IPv4 and have a level of IPPROTO_IP. We defer discussion of the multicasting socket options until Section 21.6.
IP_HDRINCL Socket Option
If this option is set for a raw IP socket (Chapter 28), we must build our own IP header for all the datagrams we send on the raw socket. Normally, the kernel builds the IP header for datagrams sent on a raw socket, but there are some applications (notably traceroute) that build their own IP header to override values that IP would place into certain header fields. When this option is set, we build a complete IP header, with the following exceptions:
1. IP always calculates and stores the IP header checksum.
2. If we set the IP identification field to 0, the kernel will set the field.
3. If the source IP address is INADDR_ANY, IP sets it to the primary IP address of the outgoing interface.

Setting IP options is implementation-dependent. Some implementations take any IP options that were set using the IP_OPTIONS socket option and append these to the header that we build, while others require our header to also contain any desired IP options. Some fields must be in host byte order, and some in network byte order. This is
implementation-dependent, which makes writing raw packets with IP_HDRINCL not as portable as we’d like.
We show an example of this option in Section 29.7.
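This is not the book's Section 29.7 example, but as a rough illustration, enabling the option on a raw socket looks something like this (superuser privileges are required; building the IP header itself is left out):

```c
/* Raw socket with IP_HDRINCL: after this, every datagram sent on sockfd must
 * begin with an IP header that we built ourselves.  Sketch only; no header
 * construction or sendto() shown. */
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int make_raw_socket(void)
{
    int sockfd = socket(AF_INET, SOCK_RAW, IPPROTO_UDP);
    if (sockfd < 0) {
        perror("socket");
        return -1;
    }

    const int on = 1;
    if (setsockopt(sockfd, IPPROTO_IP, IP_HDRINCL, &on, sizeof(on)) < 0) {
        perror("setsockopt IP_HDRINCL");
        return -1;
    }
    return sockfd;
}
```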

Well, I've only touched on the IPv4 socket options. For the rest of the chapters and info, you'll have to borrow the book from the National Library or buy it at a bookstore.

p/s:- For your info, PIKOM PC Fair is just around the corner, starting on 19th December till 21st December 2014. I'm looking forward to attending the fair, which will be held at the KL Convention Center. Some of this article is an excerpt from the book UNIX Network Programming, written by Stevens, Fenner and Rudoff and published by Addison-Wesley.

VMware Infrastructure 3……

November 26, 2014


Virtual machines are just like physical machines. You can log on to them; they have a BIOS, hard disks, memory, CPUs, operating systems, and applications. In fact, if you connect remotely to a machine, you'll never know that it's virtual unless someone tells you. Virtual machines work and behave just like physical machines. The machines themselves don't even know they are virtual.

Got a book from the National Library (PNM) about VMware Infrastructure 3 For Dummies. To fully understand your storage options and make informed decisions, you need to understand SCSI. Like the OSI networking model, SCSI makes use of several layers offering different functionality. The official name for this model is the SCSI Architecture Model (SAM). You'll need to decide whether or not you want to boot ESX from a SAN before you install the server. You can boot from SAN with both Fibre Channel and iSCSI; the only iSCSI catch is that you need to use a hardware initiator instead of a software initiator.

Switched Fibre Channel SANs were the first SAN technology fully supported by VMware. iSCSI is a newer technology. However, like Fibre Channel, iSCSI puts SCSI commands and data blocks into network frames. The difference is that iSCSI uses TCP/IP instead of Fibre Channel Protocol (FCP). To frame data, iSCSI needs a protocol initiator. This can be software-based (Microsoft's iSCSI Software Initiator) or hardware-based (a TOE card). iSCSI nodes are iSCSI devices; they can be initiators, targets, or both. A NAS device is typically a plug-and-play storage device that supports the Network File System (NFS), which is open source, or Server Message Block (SMB), which is Windows networking. VMware uses the NFS protocol because it's more of an open standard than SMB. Most of VMware's datacenter features, such as VMotion and DRS, work with NAS. However, VCB does not.

The VMkernel loads in high memory and controls all your hardware. It is an abstraction layer (hiding the implementation details of hardware sharing) that virtualizes hardware. The VMkernel assumes that all the hardware in your system is functioning properly. Any faulty hardware can cause it to crash, yielding the Purple Screen of Death (PSOD). Additionally, the VMkernel controls all scheduling for the ESX machine. This includes virtual machines and the Service Console. You can install ESX on an Intel processor that is Xeon or later, or on an AMD Opteron processor in 32-bit mode. You also need at least 2GB of RAM and 560MB of disk space. Of course, the more CPUs, RAM, and disk storage you have, the more virtual machines you can support. VMware High Availability (HA) is supported only experimentally in ESXi. If it does not work, you will need to manually start virtual machines on another ESX or ESXi host if the one they were running on fails. The VMware Infrastructure Client (VIC) is your one-stop shop for all your VMware Infrastructure 3 needs. The VIC can log in to and manage ESX hosts directly, or as a proxy through VirtualCenter.

Your virtual machines connect to virtual switches. Virtual switches, in turn, connect to NICs in your ESX host. And the NICs connect to your physical network. Virtual switches perform three different functions for an ESX host. Each function is considered a different connection type or port:

1. Virtual machines.

2. VMkernel.

3. Service Console.

Load balancing offers different ways to pick which uplink adapter to use for outgoing traffic. You can choose from a virtual port-based algorithm, a MAC address-based algorithm, an IP address-based algorithm, and an explicit failover order. Each has its tradeoffs.

p/s:- This is an excerpt taken from the book VMware Infrastructure 3 For Dummies by William J. Lowe – Wiley Publishing Inc.

– Just to make a note… PIKOM PC Fair will be held at the KL Convention Center on 19th-21st December 2014… I will be attending the PC Fair next month….


Fuzzing…Brute Force Vulnerability Discovery……

October 24, 2014


This week I'm writing about Fuzzing… Brute Force Vulnerability Discovery… I just got the book from the National Library (PNM). Fuzzing is a method for discovering faults in software by providing unexpected input and monitoring for exceptions. It is typically an automated or semi-automated process that involves repeatedly manipulating and supplying data to target software for processing. Fuzzing has evolved into one of today's most effective approaches to testing software security. To "fuzz", you attach a program's inputs to a source of random data, and then systematically identify the failures that arise. Hackers have relied on fuzzing for years. Renowned fuzzing experts show you how to use fuzzing to reveal weaknesses in your software before someone else does.

Pregenerated Test Cases – this is the approach taken by the PROTOS framework. Test case development begins with studying a particular specification to understand all supported data structures and the acceptable value ranges for each. Hard-coded packets or files are then generated that test boundary conditions or violate the specification altogether. Those test cases can then be used to test how accurately the specification has been implemented on target systems. Creating test cases can require considerable work up front, but they have the advantage of being reusable to uniformly test multiple implementations of the same protocol or file format.

Manual Protocol Mutation Testing – there is no automated fuzzer involved. The researcher is the fuzzer. After loading up the target application , the researcher simply enters inappropriate data in an attempt to crash the server or induce some undesirable behaviour. This class of fuzzing is most often applied to Web applications.

Mutation or Brute Force Testing – a fuzzer that starts with a valid sample of a protocol or data format and continually mangles every individual byte, word, dword, or string within that data packet or file. This is a great early approach because it requires very little up-front research, and implementing a basic brute force fuzzer is relatively straightforward.

Network Protocol Fuzzing – I would like to touch on Chapter 14, which covers network protocol fuzzing. It requires identifying the attack surface, mutating or generating error-inducing fuzz values, transmitting those fuzz values to a target, and monitoring that target for faults. If your fuzzer communicates with its target over some form of socket, then it is a network protocol fuzzer.
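To give a flavour of both ideas, mutation fuzzing plus network delivery, here is a small hedged sketch in C: it takes a valid sample message, mangles one byte at a time through every possible value, and sends each mutant to a target. The sample bytes, host, and port are placeholders only, and a real fuzzer would also monitor the target for faults.

```c
/* Tiny brute-force mutation fuzzer sketch: mangle every byte of a valid
 * sample, one value at a time, and send each mutant over TCP.  The target
 * address, port, and sample are placeholders for illustration; no target
 * monitoring is done here, which a real fuzzer would need. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static const unsigned char sample[] = "USER anonymous\r\n";   /* valid baseline */

static void send_case(const unsigned char *buf, size_t len)
{
    struct sockaddr_in dst;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return;

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(2121);                    /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);  /* placeholder host */

    if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) == 0)
        write(fd, buf, len);
    close(fd);
}

int main(void)
{
    unsigned char mutant[sizeof(sample)];

    for (size_t i = 0; i < sizeof(sample) - 1; i++) {
        for (int v = 0; v < 256; v++) {              /* brute-force every value */
            memcpy(mutant, sample, sizeof(sample));
            mutant[i] = (unsigned char)v;
            send_case(mutant, sizeof(sample) - 1);
        }
        printf("fuzzed byte %zu\n", i);
    }
    return 0;
}
```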

The book contains chapters about Fuzzer Methods and Fuzzer Types, Data Representation and Analysis, Requirements for Effective Fuzzing, Automation and Data Generation, Environment Variable and Argument Fuzzing, and so on, from Chapter 8 to 26. I strongly recommend this book to people in the software engineering and malware analysis fields.

p/s:- Excerpt taken from the book Fuzzing… Brute Force Vulnerability Discovery, published by Addison-Wesley.


CCNA Voice…

October 7, 2014


Just a few weeks ago, I borrowed a book entitled CCNA Voice – Study Guide by Sybex. It's a pretty good book to read for those who are interested in taking CCNA Voice. It includes about 11 chapters covering VoIP. In the Cisco Unified Communications architecture, Unified Communications Managers are what make IP telephony possible. These hardware/software devices are the brains that handle IP call processing. The call processing portion of a Unified Communications system handles the sequence of operations from the time a user picks up a phone to make a call to the time the user ends the call by hanging up. All of the signaling, dial interpretation, ringing, and call connecting is performed by the call processor. From a phone user's standpoint, the call processor acts like a legacy analog or digital phone system. All of the basic phone functions such as dialing, ring signals, and interactions are the same as they've always been. This is obviously by design; because users are so familiar with using phones, it would be very difficult to modify user behavior.

Cisco Unified Communications Manager
When moving from Cisco Unified Communications Manager Business Edition to a full CCM solution, you are primarily gaining two key benefits: redundancy and scalability. The full Cisco Unified Communications Manager network solution can scale to virtually any size and allows you to implement multiple redundant servers that can support IP phones and applications should any of your primary call processing servers fail.

Applications Layer
As you move up to the next layer of the Cisco VoIP structure, you encounter the applications that expand the functionality of the voice network in some way. Many applications have already been developed for the Cisco VoIP solution, each of them adding its own special features to the voice network. Three of these application servers stand out as “essential applications” for many VoIP networks: Cisco Unity (voice mail), Interactive Voice Response (IVR)/Auto Attendant, and Unified Contact Center.

Cisco Unity Products
Cisco has designed the Cisco Unity product line to encompass everything dealing with messaging. Whereas traditional phone systems are geared to deliver messages to telephone handsets, Cisco Unity allows you to deliver messages to a variety of clients. This allows VoIP network users to unify (thus the name) all messaging into a single point of access. For
example, fax messages, voice mail, and e-mail can all be delivered to a single inbox. The Cisco Unity product line comes in three different flavors, as discussed in the following sections:
1. Cisco Unity Express
2. Cisco Unity Connection
3. Cisco Unity

p/s:- Taken from the excerpt of CCNA Voice – Study Guide, from Sybex.


TCP Idle Scans in IPV6…

February 10, 2014


After discovering how to conduct the TCP Idle Scan in IPv6, 21 different operating systems and versions have been analyzed regarding their properties as idle host. Among those, all nine tested Windows systems could be used as idle host. This shows that the mistake of IPv4 to use predictable identification fields is being repeated in IPv6. Compared to IPv4, the idle host in IPv6 is also not expected to remain idle, but only not to send fragmented packets. To defend against this bigger threat, the article also introduces short-term defenses for administrators as well as long-term defenses for vendors.

1. INTRODUCTION
When trying to attack a target, one of the first steps performed by an attacker will be to execute a port scan in order to discover which services are offered by the system and can be attacked. In the traditional approach for port scanning, SYNs are sent directly to various ports on the target to evaluate which services are running.

However, this method is easy to detect and to be traced back to the attacker. To remain undetected, different methods for port scanning exist, all providing various advantages and disadvantages [8]. One of those methods is the TCP Idle Scan. With this port scanning technique, the attacker uses the help of a third-party system, the so-called idle host, to cover his tracks. Most modern operating systems have been improved so that they cannot be used as idle host, but research has shown that the scan can still be executed by utilizing network printers [11]. At first sight, IPv6 seems immune to the idle scan technique, as the IPv6 header no longer contains the identification field. However, some IPv6 traffic still uses an identification field, namely if fragmentation is used. Studying the details of IPv6 reveals that an attacker can force fragmentation between other hosts. The attack on IPv6 is trickier than on IPv4 but has the benefit that more machines will be suited as idle hosts. This is because we only require the idle host not to create fragmented
IPv6 traffic, whereas in IPv4 the idle host is not allowed to create traffic at all.

2. Background
The TCP Idle Scan is a stealthy port scanning method, which allows an attacker to scan a target without the need of sending a single IP packet containing his own IP address to the target. Instead, he uses the IP address of a third host, the idle host, for the scan. To be able to retrieve the results from the idle host, the attacker utilizes the identification field in the IPv4 header (IPID), which is originally intended for fragmentation.

3. Conducting the TCP Idle Scan in IPv6
This section deals with the characteristics of the TCP Idle Scan in IPv6. Compared to IPv4, where most modern operating systems use protection mechanisms against the scan, it is novel to conduct the scan in IPv6. Therefore, not all operating systems use the same protection mechanisms as in IPv4. To give an overview of the behavior from various operating systems, tests have been conducted with 21 different systems, and the results are shown and discussed.

4. Behavior of various systems
As stated previously, for executing the TCP Idle Scan in IPv6 it is a necessity that the identification value is assigned by the idle host on a predictable and global basis. To determine which operating systems form appropriate idle hosts, 21 different operating systems and versions have been tested to establish their method of assigning the identification value. Among all the tested systems, six assigned the identification value on a random basis and can therefore not be used as idle hosts. Out of the remaining 15, five assigned their values on a per-host basis, which also makes those systems unusable. Another system which cannot be used as an idle host is OS X 10.6.7, which does not accept PTB messages with an MTU smaller than 1280 bytes. The nine systems which are left, and can be used as idle hosts for the TCP Idle Scan in IPv6, are all Windows operating systems.

System | Assignment method | Usable (√ = yes, X = no)
Android 4.1 (Linux 3.0.15) | Per host, incremental (1) | X
FreeBSD 7.4 | Random | X
FreeBSD 9.1 | Random | X
iOS 6.1.2 | Random | X
Linux 2.6.32 | Per host, incremental (2) | X
Linux 3.2 | Per host, incremental (1) | X
Linux 3.8 | Per host, incremental | X
OpenBSD 4.6 | Random | X
OpenBSD 5.2 | Random | X
OS X 10.6.7 | Global, incremental (3) | X
OS X 10.8.3 | Random | X
Solaris 11 | Per host, incremental | X
Windows Server 2003 R2 64bit, SP2 | Global, incremental | √
Windows Server 2008 32bit, SP1 | Global, incremental | √
Windows Server 2008 R2 64bit, SP1 | Global, incremental by 2 | √
Windows Server 2012 64bit | Global, incremental by 2 (4) | √
Windows XP Professional 32bit, SP3 | Global, incremental (5) | √
Windows Vista Business 64bit, SP1 | Global, incremental | √
Windows 7 Home Premium 32bit, SP1 | Global, incremental by 2 | √
Windows 7 Ultimate 32bit, SP1 | Global, incremental by 2 | √
Windows 8 Enterprise 32bit | Global, incremental by 2 (4) | √

(1) Host calculates wrong TCP checksum for routes with PMTU < 1280
(2) No packets are sent on routes with PMTU < 1280
(3) Does not accept Packet Too Big messages with MTU < 1280
(4) Per-host offset
(5) IPv6 disabled by default

TABLE 1: List of tested systems

A special behavior occurred when testing Windows 8 and Windows Server 2012. A first analysis of the identification values sent to different hosts gives the impression that the values are assigned on a per-host-basis and start at a random initialization
value. A closer investigation though revealed that the values being assigned for one system are also incremented if messages are sent to another system. This leads to the conclusion that those operating systems use a global counter, but also a random offset for each host, which is added to the counter to create the identification value. However, the global counter is increased each time a message is sent to a host. For the TCP Idle Scan in IPv6, this means that the systems are still suitable as idle hosts, as from the view of the attacker, the identification value received from the idle host increases each time the idle host sends a message to the target. Being still usable as
idle host, it is a complete mystery to us what should be achieved with this behavior.

6. Conclusion
This paper has shown that by clever use of some IPv6 features, the TCP Idle Scan can successfully be transferred from IPv4 to IPv6. Therefore, this type of port scan remains a powerful tool in the hands of an attacker who wants to cover his tracks, and a challenge for anybody who tries to trace back the scan to its origin. The fact that major operating systems assign the identification value in the fragmentation header in a predictable way also drastically increases the chances for an attacker to find a suitable idle host for executing the TCP Idle Scan in IPv6. Because the idle host is also not required to be completely idle, but only expected not to create IPv6 traffic using the fragmentation header, these chances are increased additionally. What remains is the question why it is still a common practice to utilize predictable identification values. The danger of predictable sequence numbers was already disclosed by Morris [13] in 1985. Although his article covered TCP, the vulnerabilities were caused by the same problem: a predictable assignment of the sequence number. For this reason, he advised the use of random sequence numbers. With the TCP Idle Scan in IPv4 first discovered in 1998, it has been shown that the necessity of unpredictable identification values also applies to IPv4. This article has shown that in IPv6, too, predictable identification values facilitate attacks and should be substituted with random values.

To prove that the TCP Idle Scan in IPv6 works in practice, a proof of concept has been created using the Python program scapy, which allows easy creation and manipulation of packets. The proof of concept can be found in the appendix. Furthermore, the security scanner Nmap, which already provided a very elaborate version of the TCP Idle Scan in IPv4, has been extended in order to also handle the TCP Idle Scan in IPv6 [10]. Until vendors are able to provide patches for assigning unpredictable identification values in the fragmentation header, administrators are advised to implement the short-term protection mechanisms described in Section 5. Additionally, one might consider an update of RFC 1981, which forces a host to append an empty fragmentation header to every IPv6 packet after receiving an ICMPv6 Packet Too Big message with an MTU smaller than the IPv6 minimum MTU. Likewise, updating RFC 2460 towards an obligatory random assignment of the identification value in the fragmentation header should be considered as well.

p/s:- This article is taken from the excerpt of Hack In The Box Magazine, January 2014 – Vol 4, Issue 10.

– Last year's PIKOM PC Fair, in December 2013, was the best PC Fair that I've ever attended…. Great exhibits from all the computer manufacturers… The one from MSI was the best, not to mention the Kaspersky booth…. Bought an 8GB pendrive…

– Well… this is my latest post for this year, 2014…. Hope to see you soon…..
