
Modern Operating Systems…

February 2, 2018


Got this book from the National Library (PNM) last month. This book covers operating system concepts: processes and threads, interprocess communication, scheduling, deadlocks, memory management, input/output, file systems, multimedia operating systems, multiprocessor systems, security, and Linux/UNIX.

All the runnable software on the computer, sometimes including the operating system, is organized into a number of sequential processes, or just processes for short. A process is just an executing program, including the current values of the program counter, registers, and variables. Conceptually, each process has its own virtual CPU. In reality, of course, the real CPU switches back and forth from process to process, but to understand the system, it is much easier to think about a collection of processes running in pseudoparallel than to try to keep track of how the CPU switches from program to program. This rapid switching back and forth is called multiprogramming.

To implement the process model, the operating system maintains a table (an array of structures), called the process table, with one entry per process. (Some authors call these entries process control blocks.) This entry contains information about the process state, its program counter, stack pointer, memory allocation, the status of its open files, its accounting and scheduling information, and everything else about the process that must be saved when the process is switched from the running to the ready or blocked state, so that it can be restarted later as if it had never been stopped.
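As a sketch, a process-table entry can be modelled as a plain record of saved CPU and bookkeeping state. The field names below are illustrative, not taken from any real kernel:

```python
from dataclasses import dataclass, field

# A minimal sketch of one process-table entry (a "process control block").
# Field names are illustrative, not from any real kernel.
@dataclass
class ProcessControlBlock:
    pid: int
    state: str = "ready"           # running, ready, or blocked
    program_counter: int = 0
    stack_pointer: int = 0
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)

# The process table: one entry per process, keyed by PID.
process_table = {}

def context_switch_out(pcb, pc, sp, regs):
    """Save the CPU state into the PCB so the process can resume later
    as if it had never been stopped."""
    pcb.program_counter = pc
    pcb.stack_pointer = sp
    pcb.registers = regs
    pcb.state = "ready"

pcb = ProcessControlBlock(pid=1)
process_table[pcb.pid] = pcb
context_switch_out(pcb, pc=0x4000, sp=0x7FF0, regs={"ax": 42})
print(pcb.state)  # ready
```

Restarting the process is just the reverse: load the saved registers, stack pointer, and program counter back into the CPU.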

Now let us consider having the kernel know about and manage the threads. No run-time system is needed in each process, as shown. Also, there is no thread table in each process. Instead, the kernel has a thread table that keeps track of all the threads in the system. When a thread wants to create a new thread or destroy an existing thread, it makes a kernel call, which then does the creation or destruction by updating the kernel thread table.

The kernel’s thread table holds each thread’s registers, state, and other information. The information is the same as with user-level threads, but it is now in the kernel instead of in user space (inside the run-time system). This information is a subset of the information that traditional kernels maintain about each of their single-threaded processes, that is, the process state. In addition, the kernel also maintains the traditional process table to keep track of processes.

When the semaphore’s ability to count is not needed, a simplified version of the semaphore, called a mutex, is sometimes used. Mutexes are good only for managing mutual exclusion to some shared resource or piece of code. They are easy and efficient to implement, which makes them especially useful in thread packages that are implemented entirely in user space.

A mutex is a variable that can be in one of two states: unlocked or locked. Consequently, only 1 bit is required to represent it, but in practice an integer often is used, with 0 meaning unlocked and all other values meaning locked. Two procedures are used with mutexes. When a thread (or process) needs access to a critical region, it calls mutex_lock. If the mutex is currently unlocked (meaning that the critical region is available), the call succeeds and the calling thread is free to enter the critical region.
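Python's `threading.Lock` behaves exactly like the mutex described here; a small sketch (the shared-counter example is my own, not from the book):

```python
import threading

counter = 0
mutex = threading.Lock()   # a mutex: either unlocked or locked

def increment(n):
    global counter
    for _ in range(n):
        # mutex_lock: blocks if another thread currently holds the lock
        with mutex:
            counter += 1   # the critical region
        # mutex_unlock happens automatically when the with-block exits

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Without the lock, the four threads' read-modify-write sequences could interleave and lose updates; with it, every increment executes under mutual exclusion.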

The best possible page replacement algorithm is easy to describe but impossible to implement. It goes like this. At the moment that a page fault occurs, some set of pages is in memory. One of these pages will be referenced on the very next instruction (the page containing that instruction). Other pages may not be referenced until 10, 100, or perhaps 1,000 instructions later. Each page can be labeled with the number of instructions that will be executed before that page is first referenced. The optimal algorithm simply removes the page with the highest label.
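Although it cannot be implemented in a real OS (it requires knowledge of the future), the optimal algorithm is easy to simulate when the whole reference string is known in advance: evict the page whose next use lies farthest ahead. A sketch (the reference string is a common textbook example, not from this excerpt):

```python
def optimal_evict(frames, future_refs):
    """Pick the page whose next reference is farthest in the future."""
    def next_use(page):
        try:
            return future_refs.index(page)
        except ValueError:
            return float("inf")   # never referenced again: the perfect victim
    return max(frames, key=next_use)

def simulate(refs, num_frames):
    """Run the optimal (Belady) algorithm; return the number of page faults."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue              # hit: nothing to do
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)   # free frame available
        else:
            frames.remove(optimal_evict(frames, refs[i + 1:]))
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(simulate(refs, 3))  # 9
```

No real algorithm can do better than this count, which is why the optimal algorithm is used as a yardstick for judging practical ones like LRU.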

P/S: Some of the passages above are excerpts from the book Modern Operating Systems, Second Edition, written by Andrew S. Tanenbaum and published by Prentice Hall Inc. It’s a good book to read for students taking the Operating Systems subject.



Architecture, Programming and Applications of Advanced Microprocessors…

January 2, 2018


Happy New Year 2018! Hope this year brings more income and profit. This new year I would like to post a review of a book about microprocessors. Got this book from PNM (Perpustakaan Negara Malaysia) last year. This book tells us about the 8086, Pentium and Pentium Pro microprocessor architectures, how they work in a microprocessor circuit, and the architecture and assembly-language programming concepts of the advanced Intel microprocessor family, from the 8086 up to the Pentium Duo.

The book covers superscalar technology and the function of graphics coprocessor and video processor chips; interfacing chips are also illustrated with connection diagrams. The Intel 8086 is a 16-bit HMOS microprocessor implemented in n-channel silicon-gate technology. It is called a 16-bit microprocessor because its arithmetic logic unit, internal registers and most of its instructions are designed to operate on 16-bit binary words. It is a 40-pin IC chip constructed from 29,000 transistors. It has 20 address lines, of which the low-order 16 are multiplexed as the 16-bit data bus. The four high-order address lines are also multiplexed: they carry four high-order address bits and also four status signals.

Parallel fetching of instructions by the BIU (Bus Interface Unit) and execution of instructions by the EU is known as pipelining. Pipelining is achieved by using more than one functional unit working simultaneously. While the execution unit is busy decoding or executing an instruction, the bus interface unit fetches instruction bytes for the next operation of the execution unit. These pre-fetched instruction bytes are stored in a first-in, first-out (FIFO) register set in the BIU. This register set is called the queue. Thus, during execution of the current instruction, the next instruction is received by the execution unit from the queue of the BIU. As a result, the EU requires negligible time to fetch the instruction from the BIU, the idle time of the execution unit is reduced, and the microprocessor becomes faster. Hence pipelining increases the processing speed of the microprocessor.
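As a toy model of this BIU/EU split (the byte values below are made up; the real 8086 queue is 6 bytes deep):

```python
from collections import deque

QUEUE_DEPTH = 6   # the 8086 prefetch queue holds up to 6 bytes

class BIU:
    """Bus Interface Unit: fetches instruction bytes ahead of time."""
    def __init__(self, memory):
        self.memory = memory      # list of instruction bytes (illustrative)
        self.ip = 0               # fetch pointer into memory
        self.queue = deque()      # the FIFO prefetch queue

    def prefetch(self):
        """While the EU is busy, the BIU keeps the queue topped up."""
        while len(self.queue) < QUEUE_DEPTH and self.ip < len(self.memory):
            self.queue.append(self.memory[self.ip])
            self.ip += 1

class EU:
    """Execution Unit: takes its next byte from the queue, not from memory."""
    def fetch_next(self, biu):
        biu.prefetch()
        return biu.queue.popleft()

biu = BIU(memory=[0x90, 0xB0, 0x01, 0xC3])   # made-up instruction stream
eu = EU()
print(eu.fetch_next(biu))   # 144 (0x90); the queue already holds the rest
```

The point the model shows: after the first fetch, the remaining bytes are already sitting in the queue, so the EU never waits on the bus.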

The interrupts of the Intel microprocessor family include two hardware pins, INTR and NMI. Another hardware pin, INTA, is used to acknowledge an interrupt requested through INTR. The microprocessor also includes the software interrupts INT, INTO, INT3 and BOUND. The interrupt flag (IF) and trap flag (TF) are related to the interrupt structure. There are three sources of interrupts in the 8086:

1- Hardware interrupt: An interrupt caused by the application of an external signal to either the non-maskable interrupt (NMI) input pin or the maskable interrupt (INTR) input pin is called a hardware interrupt.

2- Software interrupt: Execution of an interrupt instruction such as INT is known as a software interrupt.

3- Error-condition interrupt: An interrupt caused by an error condition produced in the 8086 by the execution of an instruction, e.g. the divide-by-zero interrupt.

Lastly, it’s quite a good book for those interested in microprocessor architecture. Lots of assembly-language code here programming the microprocessor, registers, buffers, ICs, controllers and much more. Some of the article above is an excerpt from the book Architecture, Programming and Applications of Advanced Microprocessors, Second Edition, written by A.K. Ganguly and published by Alpha Science, copyright 2012.



Malaysia IT Fair 2017 – Midvalley Exhibition Centre – Megamall….

December 17, 2017


Malaysian IT Fair 2017 was held at the Mid Valley Exhibition Centre, Megamall, from 15/12/2017 till 17/12/2017. I went to attend it yesterday at Midvalley Megamall. The journey from my house took about 45 minutes; I got stuck on the highway trying to reach Midvalley, but at last reached the parking lot. The first booth that I entered was the Kaspersky booth. They presented me with the latest Kaspersky Anti-Virus 2018, Kaspersky Internet Security and Kaspersky Small Office Security. There was a promotional price if you bought Kaspersky Anti-Virus products at Malaysian IT Fair 2017.

Then, I went into the Exhibition Centre, and the first thing I saw was a gaming rig: an HP Omen with an AMD Ryzen 7 processor, 16GB RAM and 2TB of hard disk space. I played Dirt 4 and was quite amazed by its performance; the processor is really fast, there was no lag when playing Dirt 4, and it’s really a fast computer. It uses an AMD Ryzen 7 1800X 8-core processor at 3.6GHz/4.0GHz. The graphics card was an AMD Radeon RX580 (4GB GDDR5 graphics memory).


Bitcoin mining using GPUs…


Then, after walking through several booths, I encountered a Bitcoin mining server at the Jayacom booth. It was a DIY Bitcoin mining system. The sales assistant explained to me the Ethereum Package 02: a mining rig that uses an Intel G4400 3.3GHz processor, a Biostar socket 1151 TB250-BTC motherboard, 8GB of G.Skill Aegis Series DDR4 2400MHz memory, and a Galaxy Gamer SSD L 120GB 2.5″ SSD, with a Bitcoin 1600W power supply. The price is RM11,729.00; in comparison, Ethereum Package 03 is priced at RM12,229.00. For a Bitcoin server, he suggested I buy the Antminer S9, whose power supply runs at 11.60–13.0V, priced at RM16,900.00.

Some booths at Malaysian IT Fair 2017

At the MSI booth… a gaming rig plus an MSI LCD monitor


A quite interesting gaming rig.

At the Logitech booth…


Me at Kaspersky Booth…The event is co-sponsored by Kaspersky…

Quite a cool PC from Illegear…

Me at the Kaspersky booth… Grab Kaspersky Anti-Virus 2017, Kaspersky Internet Security for PC and mobile devices, and Kaspersky Small Office Security at a promotional price, which ends today. Stand to win prizes for buying Kaspersky products, with a redemption available for them. You can also get many computer accessories, IT networking products, external hard drives and many other IT gadgets at low prices. Companies that had booths here were Lenovo, HP, MSI, Asus ROG, Logitech, Microsoft, Western Digital and lots more.

Lastly, I hope to see the Malaysian IT Fair again next year in 2018, and I hope it brings more IT companies offering better products and services. This year went quite smoothly, and the prices for gaming PCs, notebooks and IT gadgets were quite reasonable. Till then, happy gaming!



Illustrated C# 2008 …..

November 29, 2017


Got this book from the National Library (PNM) on 11/11/2017. The book tells us how to program in the C# language; you’ll gain a thorough working knowledge of all aspects of the language, whether you’re a novice programmer or a seasoned veteran of other languages. The chapters in this book cover types, storage and variables, classes, methods, inheritance, expressions and operators, statements, namespaces and assemblies, exceptions, structs and many more. Illustrations alone, however, are not sufficient to explain a programming language and platform. The goal of this book is to find the best combination of words and illustrations to give you a thorough understanding of the language, and to allow the book to serve as a reference resource as well.

The compiler for a .NET language takes a source code file and produces an output file called an assembly. An assembly is either an executable or a DLL. The process is illustrated here. The code in an assembly is not native machine code, but an intermediate language called the Common Intermediate Language (CIL). An assembly, among other things, contains the following items:

1. The program’s CIL

2. Metadata about types used in the program.

3. Metadata about references to other assemblies.

The acronym for the intermediate language has changed over time, and different references use different terms. Two other terms for the CIL that you might encounter are IL (Intermediate Language) and MSIL (Microsoft Intermediate Language), which was used during initial development and early documentation.

Some types, such as short, int, and long, are called simple types, and can only store a single data item. Other types can store multiple data items. An array, for example, is a type that can store multiple items of the same type. The individual items are called elements, and are referenced by a number, called an index. Other types, however, can contain data items of many different types. The individual elements in these types are called members, and, unlike arrays, in which each element is referred to by a number, these members have distinct names. There are two types of members: data members and function members.

1. Data members store data that is relevant to the object of the class or the class itself.

2. Function members execute code. Function members define how the type can act.

A method is a named block of executable code that can be executed from many different parts of the program, and even from other programs. (There are also anonymous methods, which aren’t named.) When a method is called, or invoked, it executes its code, and then returns to the code that called it. Some methods return a value to the position from which they were called. Methods correspond to member functions in C++. The minimum syntax for declaring a method includes the following components:

1. Return type: This states the type of value the method returns. If a method does not return a value, the return type is specified as void.

2. Name: This is the name of the method.

3. Parameter list: This consists of at least an empty set of matching parentheses. If there are parameters, they are listed between the parentheses.

4. Method body: This consists of a matching set of curly braces, containing the executable code.
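The four components above have analogues in most languages; here is a Python sketch with illustrative names (Python annotates the return type rather than declaring it, and None plays the role of C#'s void):

```python
# The same four parts in a Python function: return type (as an annotation),
# name, parameter list, and body. All names here are my own illustration.

def area(width: float, height: float) -> float:   # return type, name, parameters
    """Method body: a block of executable code that returns a value."""
    return width * height

def log_result(value: float) -> None:   # 'None' plays the role of void
    """A method that returns no value."""
    print(f"area = {value}")

result = area(3.0, 4.0)   # calling (invoking) the method
log_result(result)        # prints "area = 12.0"
```

As in C#, the caller's code resumes at the point of the call once the method body finishes executing.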

Some of the passages above are excerpts from the book Illustrated C# 2008, written by Daniel Solis and published by Apress.


Database Systems.


Received and read this book on 11/11/2017; got it from the National Library (PNM). The book discusses database systems and their usage in our daily activities. An important element in discussing databases is explaining the data. Data and information are the basic elements contributing to the process of decision-making. As today’s society relies so much on information, data will be utilized and generated at all times. Before taking a step further, it is good to differentiate the terms data and information, although occasionally both are used to refer to the same thing.

In the client-server architecture, the database and DBMS are stored in a computer known as a server. Server computers usually possess higher processing capability; the server acts as the back end and connects to client computers. The client computer acts as the front end in a local area network, as depicted. This design is able to reduce costs, as we can use workstations or personal computers as clients. Besides sharing the database, client-server architecture allows sharing of other resources such as printers, scanners, data storage equipment and others. Requests for database usage are made by the client, while the server provides database management and communication services. Client-server architecture is suitable for small and medium working groups, such as a library database system, a student fee payment system, or a supermarket’s sales and inventory system.

Two main objectives in creating a database are to achieve a high level of data independence and high data abstraction. Specifically, data independence means that changes in storage structure and data access techniques do not affect application programs. This condition can be achieved because the database does not only keep user data, but also a data dictionary that includes information on the structure of data in the database. It means information about data organisation and its access techniques need not be coded in application programs, as is done in a file processing system.

Data independence is important to database systems because of two main factors:

1. The need for different user views of the same set of data. If data independence did not exist, we could not provide different views of the same data.

2. Very dynamic databases. Data and application programs expand with the increase of user requirements. Just imagine: if there were no data independence, programmers would spend a great deal of time writing and changing program code whenever additional data requirements or changes to the storage structure occurred.

Data abstraction can be achieved as a result of data independence. Thus, users are not burdened with the database’s physical structure, but only need to focus on the abstract data views that they require. It also suits the different views needed by different categories of users when the database is shared by many. Data abstraction is supported by the three-level architecture introduced by ANSI-SPARC in 1975, which has become fundamental to the architecture of several database systems today.

The internal level describes the data structures and file organisations that enable data to be physically stored on storage devices. The internal level interfaces with the operating system’s access methods in order to establish indexes and data storage mechanisms. Below the internal level there is a physical level controlled by the operating system; some DBMSs make full use of the operating system’s access-method facilities, while other DBMSs use their own file organisation.

The internal schema, written in a DDL, states the metadata of the internal level. It holds information such as:

1. Data structure used

2. Data representation

3. Records sequence

4. Space and storage allocation for data and indexes.

Finally, this is a good book to read on the subject of database systems. It was written in the spirit of helping students learn database management systems in a guided manner, based on real-life examples. Some of the passages above are excerpts from the book Database System, written by Prof. Dr. Abdullah Embong, printed in 2010 and published by University Malaysia Pahang.



SQL and Relational Theory….How to write accurate SQL code…..

October 11, 2017


Got this book from the National Library last month. It introduces us to SQL and relational theory: its origins and the terms used in SQL statements. And another point on terminology: having said that SQL tries to simplify one set of terms, I must say too that it does its best to complicate another. I refer to its use of the terms operator, function, procedure, routine, and method, all of which denote essentially the same thing (with perhaps very minor differences). In this book I’ll use the term operator throughout; thus, for example, I’ll refer to “=” (equality comparison), “:=” (assignment), “+” (addition), DISTINCT, JOIN, SUM, GROUP BY (etc.) all as operators specifically.

The point about principles is: they endure. By contrast, products and technologies (and the SQL language, come to that) change all the time, but principles don’t. For example, suppose you know Oracle; in fact, suppose you’re an expert on Oracle. But if Oracle is all you know, then your knowledge is not necessarily transferable to, say, a DB2 or SQL Server environment (it might even make it harder to make progress in that new environment). But if you know the underlying principles, in other words, if you know the relational model, then you have knowledge and skills that will be transferable: knowledge and skills that you’ll be able to apply in every environment and that will never be obsolete.

An integrity constraint (constraint for short) is basically just a boolean expression that must evaluate to TRUE. In the case of departments and employees, for example, we might have a constraint to the effect that SALARY values must be greater than zero. Now, any given database will be subject to numerous constraints; however, all of those constraints will necessarily be specific to that database and will thus be expressed in terms of the relations in that database. By contrast, the relational model as originally formulated includes two generic constraints, generic in the sense that they apply to every database, loosely speaking. One has to do with primary keys and the other with foreign keys. Here they are:

1- The entity integrity rule: Primary key attributes don’t permit nulls.

2- The referential integrity rule: There mustn’t be any unmatched foreign key values.
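The two generic rules can be tried out with SQLite from Python; the table and column names below are my own illustration, not the book's:

```python
import sqlite3

# Sketch of the two generic rules plus one database-specific constraint.
# Table/column names (dept, emp, salary) are illustrative, not the book's.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only if asked

conn.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY NOT NULL)")
conn.execute("""CREATE TABLE emp (
    empno  INTEGER PRIMARY KEY NOT NULL,              -- entity integrity
    deptno INTEGER NOT NULL REFERENCES dept(deptno),  -- referential integrity
    salary INTEGER CHECK (salary > 0)                 -- database-specific rule
)""")

conn.execute("INSERT INTO dept VALUES (10)")
conn.execute("INSERT INTO emp VALUES (1, 10, 50000)")   # fine: dept 10 exists

try:
    conn.execute("INSERT INTO emp VALUES (2, 99, 50000)")  # no dept 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)   # the unmatched foreign key value is refused
```

The salary rule is an example of a constraint specific to this database, while the PRIMARY KEY and REFERENCES clauses express the two generic rules.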

The logical difference between relations and relvars is actually a special case of the logical difference between values and variables in general, and I’d like to take a few moments to look at the more general case. Here then are some definitions:

Definition: A value is what the logicians call an “individual constant,” such as the integer 3. A value has no location in time or space. However, values can be represented in memory by means of some encoding, and those representations or encodings do have locations in time and space. Indeed, distinct representations of the same value can appear at any number of distinct locations in time and space, meaning, loosely, that any number of different variables (see the next definition) can have the same value, at the same time or different times. Observe in particular that, by definition, a value can’t be updated; for if it could, then after such an update it wouldn’t be that value any longer.

Definition: A variable is a holder for a representation of a value. A variable does have location in time and space. Also, variables, unlike values, can be updated; that is, the current value of the variable can be replaced by another value. (After all, that’s what “variable” means: to be variable is to be updatable and to be updatable is to be variable; equivalently, to be a variable is to be assignable to, and to be assignable to is to be a variable.)


In conclusion, this second edition includes new material on recursive queries, “missing information” without nulls, new update operators, and topics such as aggregate operators, grouping and ungrouping, and view updating. If you have a modest-to-advanced background in SQL, you’ll learn how to deal with a host of common SQL dilemmas. Some of the excerpts above are taken from the book SQL and Relational Theory: How to Write Accurate SQL Code, Second Edition, written by C.J. Date and published by O’Reilly Media, Inc., 2011.

Beginning ASP.NET Security.


Previously, got this book from the National Library (PNM). It tells about the uniqueness of ASP.NET security: how web pages are designed according to security considerations, using the ASP language and web pages, how to use forms and fields in a web page, user authentication using a login name and password, and much more. When debugging web applications, or trying to understand the underlying mechanisms an application uses, it is often useful to capture HTTP requests and responses. This section introduces one such useful debugging tool, Fiddler, and how you can use it to hand-craft HTTP requests. Like a lot of tools with legitimate uses, tools such as Fiddler can be used by an attacker to send fake requests to a website in an attempt to compromise it.

The mitigation technique for XSS is as follows : you , the developer , must examine and constrain all input (be it from the user, a database, an XML file, or other source) and encode it for output. Even with request validation , it is your responsibility to encode all output before writing it to a page.

Encoding output consists of taking an input string, examining each character in the string, and converting the characters from one format to another format. For example, taking the string &lt;hello&gt; and encoding it in a form suitable for HTML output (HTML encoding) would consist of replacing the &lt; with &amp;lt; and the &gt; with &amp;gt;, resulting in a safe output of &amp;lt;hello&amp;gt;.
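The same transformation is available in Python's standard library via html.escape (the book's context is .NET's Anti-XSS library; this is only to illustrate the encoding idea):

```python
import html

# html.escape performs the HTML output encoding described above.
unsafe = "<hello>"
safe = html.escape(unsafe)
print(safe)   # &lt;hello&gt;

# A script-injection attempt becomes inert text after encoding:
attack = '<script>alert("xss")</script>'
print(html.escape(attack))
```

Because the angle brackets are encoded, the browser renders the attacker's input as literal text instead of executing it as markup.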

The Anti-XSS library also includes the Security Run-time Engine (SRE), an HTTP Module which protects your ASP.NET application by using the Anti-XSS library to automatically and proactively encode data. It works by analyzing your web application and inspecting each ASP.NET web control, or controls derived from them. The module can be configured via antixssmodule.config to specify which encoding is applied to a control’s property.

All ASP.NET validation controls are normal ASP.NET controls that also implement the IValidator interface, as shown here:

public interface IValidator
{
    void Validate();
    string ErrorMessage { get; set; }
    bool IsValid { get; set; }
}

As you can see, the IValidator interface defines two properties (ErrorMessage and IsValid) and a single method (Validate). When a validation control is placed on a page, it adds itself to the page’s Validators collection. The Page class provides a Validate method that calls each control in this collection; each control performs whatever validation logic has been written, and then sets its IsValid and ErrorMessage properties. The ControlToValidate property is what attaches the validator to the input control you wish to validate.

ASP.NET controls that trigger a postback have a CausesValidation property. When set to true, a postback will cause the page’s Validate method to be called before any of the control’s event handlers run. Some controls (such as Button) have a default CausesValidation value of true; others (generally those that do not automatically trigger a postback) do not.

When you were testing the CSRF protection module you wrote , you may have tested it on a page that raises postbacks. You may have noticed another hidden form field , _EVENTVALIDATION. A common interface design for web applications is to show or hide various parts of a web page based on who a user is , and what that user can do. For example , users in an administrative role may see extra buttons and text on a page ( such as “Delete comment” or “Modify price”).

This is generally implemented by including every possible control on a page , and hiding or disabling them at run-time as the page loads using the role membership provider that ASP.NET provides , as shown here:

if (!User.IsInRole("siteadmin"))
    adminPanel.Visible = false;

When a control is hidden , the HTML it would generate is no longer included in the HTML output for a page. When a control is disabled , then , typically , the HTML-enabled attribute is set to false when the control’s HTML is rendered.

Lastly, this book explores issues with user input, including validation, cross-site scripting (XSS) and cross-site request forgery (CSRF), and examines methods for authenticating and authorizing users, including ASP.NET membership providers and preventing cookie theft. The book also presents security with the Microsoft ASP.NET Ajax framework and Silverlight and includes an overview of security with the Microsoft MVC framework. Some of the article above is an excerpt from the book Beginning ASP.NET Security, written by Barry Dorrans and published by John Wiley & Sons, 2010.



Computer Networking…A Top Down Approach….

September 19, 2017


Got this book from the National Library this month. It’s a great book to read, especially for people who take computer networking as their major or specialization. The book gives an introduction to the fundamentals of computer networking, how networking is implemented in today’s business and life, and the TCP/IP stack as implemented in today’s network design environments.

In a network application, end systems exchange messages with each other. Messages can contain anything the application designer wants. Messages may perform a control function (for example, the “Hi” messages in our handshaking example in Figure 1.2) or can contain data, such as an email message, a JPEG image, or an MP3 audio file. To send a message from a source end system to a destination end system, the source breaks long messages into smaller chunks of data known as packets. Between source and destination, each packet travels through communication links and packet switches (for which there are two predominant types, routers and link-layer switches). Packets are transmitted over each communication link at a rate equal to the full transmission rate of the link. So, if a source end system or a packet switch is sending a packet of L bits over a link with transmission rate R bits/sec, then the time to transmit the packet is L/R seconds.
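The L/R formula is easy to check with a quick calculation (the packet size and link rate below are illustrative):

```python
# Transmission delay: the time to push L bits onto a link of rate R bits/sec.
def transmission_delay(L_bits, R_bps):
    return L_bits / R_bps

# A 1,500-byte packet (12,000 bits) on a 10 Mbps link:
d = transmission_delay(L_bits=12_000, R_bps=10_000_000)
print(d)   # 0.0012 seconds, i.e. 1.2 ms
```

Note this is only the time to push the bits onto the link; propagation, processing, and queuing delays come on top of it.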

In circuit-switched networks, the resources needed along a path (buffers, link transmission rate) to provide for communication between the end systems are reserved for the duration of the communication session between the end systems. In packet-switched networks, these resources are not reserved; a session’s messages use the resources on demand and, as a consequence, may have to wait (that is, queue) for access to a communication link. As a simple analogy, consider two restaurants, one that requires reservations and another that neither requires reservations nor accepts them. For the restaurant that requires reservations, we have to go through the hassle of calling before we leave home. But when we arrive at the restaurant we can, in principle, immediately be seated and order our meal. For the restaurant that does not require reservations, we don’t need to bother to reserve a table. But when we arrive at the restaurant, we may have to wait for a table before we can be seated.

A circuit in a link is implemented with either frequency-division multiplexing (FDM) or time-division multiplexing (TDM). With FDM, the frequency spectrum of a link is divided up among the connections established across the link. Specifically, the link dedicates a frequency band to each connection for the duration of the connection. In telephone networks, this frequency band typically has a width of 4 kHz (that is, 4,000 hertz or 4,000 cycles per second). The width of the band is called, not surprisingly, the bandwidth. FM radio stations also use FDM to share the frequency spectrum between 88 MHz and 108 MHz, with each station being allocated a specific frequency band. For a TDM link, time is divided into frames of fixed duration, and each frame is divided into a fixed number of time slots. When the network establishes a connection across a link, the network dedicates one time slot in every frame to this connection. These slots are dedicated for the sole use of that connection, with one time slot available for use (in every frame) to transmit the connection’s data.

The most complicated and interesting component of nodal delay is the queuing delay, dqueue. In fact, queuing delay is so important and interesting in computer networking that thousands of papers and numerous books have been written about it [Bertsekas 1991; Daigle 1991; Kleinrock 1975, 1976; Ross 1995]. We give only a high-level, intuitive discussion of queuing delay here; the more curious reader may want to browse through some of the books (or even eventually write a PhD thesis on the subject!). Unlike the other three delays (namely, dproc, dtrans, and dprop), the queuing delay can vary from packet to packet. For example, if 10 packets arrive at an empty queue at the same time, the first packet transmitted will suffer no queuing delay, while the last packet transmitted will suffer a relatively large queuing delay (while it waits for the other nine packets to be transmitted). Therefore, when characterizing queuing delay, one typically uses statistical measures, such as average queuing delay, variance of queuing delay, and the probability that the queuing delay exceeds some specified value.

In conclusion, this book is good to read; its chapters also cover network security, wireless networks, network management, multimedia networking, the link layer, and the network layer. Some of the article above is an excerpt from the book Computer Networking – A Top-Down Approach – Fifth Edition, written by Kurose and Ross and published by Pearson.

 

Database Systems – A Practical Approach to Design, Implementation and Management

database system

Hi there… back again. This time I want to present this book – Database Systems – A Practical Approach to Design, Implementation and Management. This book is a great read: it gives us an understanding of and introduction to database systems, the history of databases, SQL coding and implementation in SQL Server, how to use Microsoft Access to implement a database using rows, columns, and tables, procedures for writing SQL statements, how to add or delete a row, column, or table, how to use a DBMS and an RDBMS, data warehousing and data mining, and lots more…

The overall description of the database is called the database schema. There are three different types of schema in the database and these are defined according to the levels of abstraction of the three-level architecture illustrated in Figure 2.1. At the highest level, we have multiple external schemas (also called subschemas) that correspond to different views of the data. At the conceptual level, we have the conceptual schema, which describes all the entities, attributes, and relationships together with integrity constraints. At the lowest level of abstraction we have the internal schema, which is a complete description of the internal model, containing the definitions of stored records, the methods of representation, the data fields, and the indexes and storage structures used. There is only one conceptual schema and one internal schema per database.

A major objective for the three-level architecture is to provide data independence, which means that upper levels are unaffected by changes to lower levels. There are two kinds of data independence: logical and physical.
Figure 2.2 Differences between the three levels.

Changes to the conceptual schema, such as the addition or removal of new entities, attributes, or relationships, should be possible without having to change existing external schemas or having to rewrite application programs. Clearly, the users for whom the changes have been made need to be aware of them, but what is important is that other users should not be.

Therefore, one of the main functions of the DBMS is to support a Data
Manipulation Language in which the user can construct statements that will cause such data manipulation to occur. Data manipulation applies to the external, conceptual, and internal levels. However, at the internal level we must define rather complex low-level procedures that allow efficient data access. In contrast, at higher levels, emphasis is placed on ease of use and effort is directed at providing efficient user interaction with the system.

A model is a representation of real-world objects and events, and their associations. It is an abstraction that concentrates on the essential, inherent aspects of an organization and ignores the accidental properties. A data model represents the organization itself. It should provide the basic concepts and notations that will allow database designers and end-users to communicate unambiguously and accurately their understanding of the organizational data. A data model can be thought of as comprising three components:
(1) a structural part, consisting of a set of rules according to which databases can be constructed;
(2) a manipulative part, defining the types of operation that are allowed on the data (this includes the operations that are used for updating or retrieving data from the database and for changing the structure of the database);
(3) a set of integrity constraints, which ensures that the data is accurate.

In a file-server environment, the processing is distributed about the network, typically a local area network (LAN). The file-server holds the files required by the applications and the DBMS. However, the applications and the DBMS run on each workstation, requesting files from the file-server when necessary, as illustrated in Figure 3.2. In this way, the file-server acts simply as a shared hard disk drive. The DBMS on each workstation sends requests to the file-server for all data that the DBMS requires that is stored on disk. This approach can generate a significant amount of network traffic, which can lead to performance problems. For example, consider a user request that requires the names of staff who work in the branch at 163 Main St. We can express this request in SQL (see Chapter 6) as:

SELECT fName, lName
FROM Branch b, Staff s
WHERE b.branchNo = s.branchNo AND b.street = ‘163 Main St’;

As the file-server has no knowledge of SQL, the DBMS must request the files corresponding to the Branch and Staff relations from the file-server, rather than just the staff names that satisfy the query.
The file-server architecture, therefore, has three main disadvantages:
(1) There is a large amount of network traffic.
(2) A full copy of the DBMS is required on each workstation.
(3) Concurrency, recovery, and integrity control are more complex, because there
can be multiple DBMSs accessing the same files.

As stated earlier, there are no duplicate tuples within a relation. Therefore, we need to be able to identify one or more attributes (called relational keys) that uniquely identify each tuple in a relation. In this section, we explain the terminology used for relational keys.
Superkey – An attribute, or set of attributes, that uniquely identifies a tuple within a relation. A superkey uniquely identifies each tuple within a relation. However, a superkey may contain additional attributes that are not necessary for unique identification, and we are interested in identifying superkeys that contain only the minimum number of attributes necessary for unique identification.
Candidate key – A superkey such that no proper subset is a superkey within the relation.
A candidate key K for a relation R has two properties:
• Uniqueness. In each tuple of R, the values of K uniquely identify that tuple.
• Irreducibility. No proper subset of K has the uniqueness property.
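To make the superkey idea concrete, here is a small sketch that tests whether a set of attributes uniquely identifies every tuple in a sample relation. The Staff attribute names and values are hypothetical, loosely in the style of the book's examples:

```java
import java.util.*;

public class KeyCheck {
    // Returns true if the given attributes uniquely identify every row,
    // i.e. the attribute set is a superkey of this sample relation.
    static boolean isSuperkey(List<Map<String, String>> rows, List<String> attrs) {
        Set<List<String>> seen = new HashSet<>();
        for (Map<String, String> row : rows) {
            List<String> key = new ArrayList<>();
            for (String a : attrs) key.add(row.get(a));
            if (!seen.add(key)) return false; // duplicate key value: not a superkey
        }
        return true;
    }

    public static void main(String[] args) {
        // Hypothetical Staff tuples (staffNo, lName, branchNo).
        List<Map<String, String>> staff = List.of(
            Map.of("staffNo", "SL21", "lName", "White", "branchNo", "B005"),
            Map.of("staffNo", "SG37", "lName", "Beech", "branchNo", "B003"),
            Map.of("staffNo", "SG14", "lName", "Ford",  "branchNo", "B003"));
        System.out.println(isSuperkey(staff, List.of("staffNo")));  // true
        System.out.println(isSuperkey(staff, List.of("branchNo"))); // false: B003 repeats
    }
}
```

A candidate key is then a superkey for which removing any attribute makes `isSuperkey` return false.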

Lastly, this book elaborates on and explains in detail database manipulation, database architecture, and object DBMSs. The content of this book is quite large to read, explore, and understand, but it's quite a good reference for a database systems book. Some of the article is an excerpt from the book Database Systems – A Practical Approach to Design, Implementation and Management, written by Thomas Connolly and Carolyn Begg.

 

h1

Java – How To Program…

January 21, 2017


java-how-to-program

This book is a great book to read. It teaches us how to program in Java, with code and exercises that can help you sharpen your skills in the Java programming language. From arrays to object-oriented programming, all the fundamentals of the Java programming language are explained in this book.

This chapter began our introduction to data structures, exploring the use of arrays to store data in and retrieve data from lists and tables of values. The chapter examples demonstrated how to declare an array, initialize an array and refer to individual elements of an array. The chapter introduced the enhanced for statement to iterate through arrays. We also illustrated how to pass arrays to methods and how to declare and manipulate multidimensional arrays. Finally, the chapter showed how to write methods that use variable-length argument lists and how to read arguments passed to a program from the command line.
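A minimal sketch of the array features summarized above: declaration and initialization, the enhanced for statement, a two-dimensional array, and a variable-length argument list (the class and method names are our own, not the book's figures):

```java
public class ArrayBasics {
    // Variable-length argument list: sums any number of ints.
    static int sum(int... values) {
        int total = 0;
        for (int v : values)   // enhanced for statement iterates the array
            total += v;
        return total;
    }

    public static void main(String[] args) {
        int[] numbers = {10, 20, 30};         // declare and initialize an array
        int[][] table = {{1, 2}, {3, 4, 5}};  // two-dimensional (ragged) array
        System.out.println(numbers[0]);       // refer to an individual element: 10
        System.out.println(sum(numbers[0], numbers[1], numbers[2])); // 60
        System.out.println(table[1].length);  // second row has 3 elements
    }
}
```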

In our discussions of object-oriented programs in the preceding chapters, we introduced many basic concepts and terminology that relate to Java object-oriented programming (OOP). We also discussed our program development methodology: We selected appropriate variables and methods for each program and specified the manner in which an object of our class collaborated with objects of Java API classes to accomplish the program’s overall goals. In this chapter, we take a deeper look at building classes, controlling access to members of a class and creating constructors. We discuss composition—a capability that allows a class to have references to objects of other classes as members. We reexamine the use of set and get methods and further explore the class type enum (introduced in Section 6.10) that enables programmers to declare and manipulate sets of unique identifiers that represent constant values. In Section 6.10, we introduced the basic enum type, which appeared within another class and simply declared a set of constants. In this chapter, we discuss the relationship between enum types and classes, demonstrating that an enum, like a class, can be declared in its own file with constructors, methods and fields. The chapter also discusses static class members and final instance variables in detail. We investigate issues such as software reusability, data abstraction and encapsulation. Finally, we explain how to organize classes in packages to help manage large applications and promote reuse, then show a special relationship between classes in the same package. Chapter 9, Object-Oriented Programming: Inheritance, and Chapter 10, Object-Oriented Programming: Polymorphism, introduce two additional key object-oriented programming technologies.

Every object can access a reference to itself with keyword this (sometimes called the this reference). When a non-static method is called for a particular object, the method’s body implicitly uses keyword this to refer to the object’s instance variables and other methods. As you will see in Fig. 8.4, you can also use keyword this explicitly in a non-static method’s body. Section 8.5 shows another interesting use of keyword this. Section 8.11 explains why keyword this cannot be used in a static method.
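A minimal example of the this reference, using a hypothetical Account class rather than one of the book's figures; the common case is a constructor parameter that shadows an instance variable:

```java
public class Account {
    private double balance;

    // The parameter shadows the field, so "this" disambiguates:
    // this.balance is the instance variable, balance is the parameter.
    public Account(double balance) {
        this.balance = balance;
    }

    public double getBalance() {
        return this.balance; // here "this." is implicit and could be omitted
    }

    public static void main(String[] args) {
        System.out.println(new Account(100.0).getBalance()); // 100.0
    }
}
```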

As you know, you can declare your own constructor to specify how objects of a class should be initialized. Next, we demonstrate a class with several overloaded constructors that enable objects of that class to be initialized in different ways. To overload constructors, simply provide multiple constructor declarations with different signatures. Recall from Section 6.12 that the compiler differentiates signatures by the number of parameters, the types of the parameters and the order of the parameter types in each signature.
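A short sketch of overloaded constructors, using a hypothetical Time class loosely in the book's style; the compiler picks a constructor by the number, types, and order of the arguments, and each overload delegates to another via this(...):

```java
public class Time {
    private final int hour, minute;

    public Time() { this(0, 0); }            // no-argument constructor delegates
    public Time(int hour) { this(hour, 0); } // overloaded: different signature
    public Time(int hour, int minute) {      // does the actual initialization
        this.hour = hour;
        this.minute = minute;
    }

    @Override
    public String toString() {
        return String.format("%02d:%02d", hour, minute);
    }

    public static void main(String[] args) {
        System.out.println(new Time());       // 00:00
        System.out.println(new Time(9));      // 09:00
        System.out.println(new Time(9, 30));  // 09:30
    }
}
```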

Every class must have at least one constructor. Recall from Section 3.7, that if you do not provide any constructors in a class’s declaration, the compiler creates a default constructor that takes no arguments when it is invoked. The default constructor initializes the instance variables to the initial values specified in their declarations or to their default values (zero for primitive numeric types, false for boolean values and null for references). In Section 9.4.1, you’ll learn that the default constructor performs another task in addition
to initializing each instance variable to its default value.

As you know, a class’s private fields can be manipulated only by methods of that class. A typical manipulation might be the adjustment of a customer’s bank balance (e.g., a private instance variable of a class BankAccount) by a method computeInterest. Classes often provide public methods to allow clients of the class to set (i.e., assign values to) or get (i.e., obtain the values of) private instance variables. As a naming example, a method that sets instance variable interestRate would typically be named setInterestRate and a method that gets the interestRate would typically be called getInterestRate. Set methods are also commonly called mutator methods, because they typically change a value. Get methods are also commonly called accessor methods or query methods.
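Following the book's naming example, here is a small hypothetical BankAccount sketch with a set/get pair for interestRate (the validation check is our own addition, not from the text):

```java
public class BankAccount {
    private double interestRate; // private: only this class's methods touch it

    // Mutator (set) method: validates before assigning.
    public void setInterestRate(double interestRate) {
        if (interestRate < 0.0)
            throw new IllegalArgumentException("rate must be non-negative");
        this.interestRate = interestRate;
    }

    // Accessor (get) method.
    public double getInterestRate() {
        return interestRate;
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount();
        account.setInterestRate(0.05);
        System.out.println(account.getInterestRate()); // 0.05
    }
}
```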

Every object has its own copy of all the instance variables of the class. In certain cases, only one copy of a particular variable should be shared by all objects of a class. A static field—called a class variable—is used in such cases. A static variable represents classwide information—all objects of the class share the same piece of data. The declaration of a static variable begins with the keyword static. Let’s motivate static data with an example. Suppose that we have a video game with Martians and other space creatures. Each Martian tends to be brave and willing to attack other space creatures when the Martian is aware that at least four other Martians are present. If fewer than five Martians are present, each of them becomes cowardly. Thus each Martian needs to know the martianCount. We could endow class Martian with martianCount as an instance variable. If we do this, then every Martian will have a separate copy of the instance variable, and every time we create a new Martian, we’ll have to update the instance variable martianCount in every Martian. This wastes space with the redundant
copies, wastes time in updating the separate copies and is error prone. Instead, we
declare martianCount to be static, making martianCount classwide data. Every Martian
can see the martianCount as if it were an instance variable of class Martian, but only one
copy of the static martianCount is maintained. This saves space. We save time by having the Martian constructor increment the static martianCount—there is only one copy, so we do not have to increment separate copies of martianCount for each Martian object.
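A minimal sketch of the Martian example as described; martianCount follows the text, while the class layout and method names are our own:

```java
public class Martian {
    // Classwide data: one copy shared by every Martian object.
    private static int martianCount = 0;

    public Martian() {
        martianCount++; // the constructor updates the single shared copy
    }

    public static int getMartianCount() {
        return martianCount;
    }

    // A Martian is brave only when at least four OTHER Martians are present.
    public boolean isBrave() {
        return martianCount >= 5;
    }

    public static void main(String[] args) {
        Martian first = new Martian();
        System.out.println(first.isBrave());           // false: only 1 Martian
        for (int i = 0; i < 4; i++) new Martian();     // four more arrive
        System.out.println(first.isBrave());           // true: 5 Martians now
        System.out.println(Martian.getMartianCount()); // 5
    }
}
```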

As we explained in the preceding section, instantiating a subclass object begins a chain of constructor calls in which the subclass constructor, before performing its own tasks, invokes its direct superclass’s constructor either explicitly (via the super reference) or implicitly (calling the superclass’s default constructor or no-argument constructor). Similarly, if the superclass is derived from another class (as is, of course, every class except Object), the superclass constructor invokes the constructor of the next class up in the hierarchy, and so on. The last constructor called in the chain is always the constructor for class Object. The original subclass constructor’s body finishes executing last. Each superclass’s constructor manipulates the superclass instance variables that the subclass object inherits. For example, consider again the CommissionEmployee3–BasePlusCommissionEmployee4 hierarchy from Fig. 9.12 and Fig. 9.13. When a program creates a BasePlusCommissionEmployee4 object, the BasePlusCommissionEmployee4 constructor is called. That constructor calls CommissionEmployee3’s constructor, which in turn calls Object’s constructor. Class Object’s constructor has an empty body, so it immediately returns control to CommissionEmployee3’s constructor, which then initializes the private instance variables of CommissionEmployee3 that are part
of the BasePlusCommissionEmployee4 object. When CommissionEmployee3’s constructor
completes execution, it returns control to BasePlusCommissionEmployee4’s constructor,
which initializes the BasePlusCommissionEmployee4 object’s baseSalary.
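The constructor call chain can be demonstrated with a stripped-down hypothetical version of the hierarchy (no instance variables, just prints showing that the superclass constructor body runs before the subclass constructor body finishes):

```java
class CommissionEmployee {
    public CommissionEmployee() {
        // Implicitly calls Object's (empty) constructor first.
        System.out.println("CommissionEmployee constructor");
    }
}

class BasePlusCommissionEmployee extends CommissionEmployee {
    public BasePlusCommissionEmployee() {
        // super() is called implicitly before this body runs.
        System.out.println("BasePlusCommissionEmployee constructor");
    }
}

public class ConstructorChain {
    public static void main(String[] args) {
        // Prints the superclass message first, then the subclass message:
        // the original subclass constructor's body finishes executing last.
        new BasePlusCommissionEmployee();
    }
}
```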

p/s:- Some of this article is excerpted from the book – Java – How To Program – Seventh Edition, written by P.J. Deitel and H.M. Deitel and published by Pearson Education in 2007.

h1

Microsoft SQL Server 2000 – Database Design and Implementation…

November 15, 2016

microsoft-sql-server-2000

Just got this book from PNM ( Perpustakaan Negara Malaysia ) – the National Library. It's quite an interesting read about using and managing SQL Server 2000. The book tells us how to do database design and how to implement it using Microsoft SQL Server 2000 Enterprise Manager.

System Integration- SQL Server 2000 works with other products to form a stable and secure data store for Internet and intranet systems:
■ SQL Server 2000 works with Windows 2000 Server and Windows NT Server security and encryption facilities to implement secure data storage.
■ SQL Server 2000 forms a high-performance data storage service for Web applications running under Microsoft Internet Information Services.
■ SQL Server 2000 can be used with Site Server to build and maintain large, sophisticated e-commerce Web sites.
■ The SQL Server 2000 TCP/IP Sockets communications support can be integrated with Microsoft Proxy Server to implement secure Internet and intranet communications.
SQL Server 2000 is scalable to levels of performance capable of handling extremely large Internet sites. In addition, the SQL Server 2000 database engine includes native support for XML, and the Web Assistant Wizard helps you to generate Hypertext Markup Language (HTML) pages from SQL Server 2000 data and to post SQL Server 2000 data to Hypertext Transport Protocol (HTTP) and File Transfer Protocol (FTP) locations.

SQL Server 2000 is an RDBMS that is made up of a number of components. The database engine is a modern, highly scalable engine that stores data in tables. SQL Server 2000 replication helps sites to maintain multiple copies of data on different computers in order to improve overall system performance while making sure that the different copies of data are kept synchronized. DTS helps you to build data warehouses and data marts in SQL Server by importing and transferring data from multiple heterogeneous sources interactively or automatically on a regularly scheduled basis. Analysis Services provides tools for analyzing the data stored in data warehouses and data marts. SQL Server 2000 English Query helps you to build applications that can customize themselves to ad hoc user questions. SQL Server 2000 Meta Data Services provides a way to store and manage metadata relating to information systems and applications. SQL Server Books Online is the online documentation provided with SQL Server 2000. SQL Server 2000 includes many graphical and command-prompt utilities that help users, programmers, and administrators perform a variety of tasks.

Table and Index Architecture – SQL Server 2000 supports indexes on views. The first index allowed on a view is a clustered index. At the time a CREATE INDEX statement is executed on a view, the result set for the view materializes and is stored in the database with the same structure as a table that has a clustered index. The data rows for each table or indexed view are stored in a collection of 8 KB data pages. Each data page has a 96-byte header containing system information, such as the identifier of the table that owns the page. The page header also includes pointers to the next and previous pages that are used if the pages are linked in a list. A row offset table is at the end of the page. Data rows fill the rest of the page, as shown in Figure 1.5.
Organization of data pages. SQL Server 2000 tables use one of two methods to organize their data pages—clustered tables and heaps:
■ Clustered tables. Clustered tables are tables that have a clustered index. The data rows are stored in order based on the clustered index key. The index is implemented as a B-tree structure that supports the fast retrieval of the rows based on their clustered index key values. The pages in each level of the index, including the data pages in the leaf level, are linked in a doubly linked list, but navigation from one level to another is done using key values.
■ Heaps. Heaps are tables that have no clustered index. The data rows are not stored in any particular order, and there is no particular order to the sequence of the data pages. The data pages are not linked in a linked list. Indexed views have the same storage structure as clustered tables.

SQL Server also supports up to 249 non-clustered indexes on each table or indexed view. The non-clustered indexes also have a B-tree structure but utilize it differently than clustered indexes. The difference is that non-clustered indexes have no effect on the order of the data rows. Clustered tables and indexed views keep their data rows in order based on the clustered index key. The collection of data pages for a heap is not affected if non-clustered indexes are defined for the table. The data pages remain in a heap unless a clustered index is defined.

Transact-SQL Debugger Window- SQL Query Analyzer comes equipped with a Transact-SQL debugger that enables you to control and monitor the execution of stored procedures. The debugger supports traditional functions, such as setting breakpoints, defining watch expressions, and single-stepping through procedures. The Transact-SQL debugger in SQL Query Analyzer supports debugging against SQL Server 2000, SQL Server 7.0, and SQL Server 6.5 Service Pack 2.

You can run the Transact-SQL Debugger only from within SQL Query Analyzer. Once started, the debugging interface occupies a window within that application, as shown in the Transact-SQL Debugger window displaying the result of debugging the CustOrderHist stored procedure in the Northwind database. When the Transact-SQL Debugger starts, a dialog box appears prompting you to set the values of input parameter variables. It is not mandatory for these values to be set at this time. You will have the opportunity to make modifications once the Transact-SQL Debugger window appears. In the dialog box, click Execute to continue with your session.

Due to connection constraints, it is not possible to create a new query while the debugger window is in the foreground. To create a new query, either bring an existing
query window to the foreground or open a new connection to the database. The Transact-SQL Debugger window consists of a toolbar, a status bar, and a series of window panes. Many of these components have dual purposes, serving as both control and monitoring mechanisms.

Only limited functionality might be available from some of these components after
a procedure has been completed or aborted. For example, you cannot set breakpoints
or scroll between entries in either of the variable windows when the procedure
is not running.

p/s:- Some of the article above is excerpted from the book Microsoft SQL Server 2000 – Database Design and Implementation, written and published by Microsoft Press in 2003.

Practical Reporting with Ruby and Rails.

practical-reporting-with-ruby-and-rails

Finally, got this book from National Library – PNM ( Perpustakaan Negara Malaysia ) about Ruby and Rails. Most of the examples in this book use Active Record as a database access library. Active Record is a simple way to access databases and database tables in Ruby. It is a powerful object-relational mapping (ORM) library that lets you easily model databases using an object-oriented interface. Besides being a stand-alone ORM package for Ruby, Active Record will also be familiar to web application developers as the model part of the web application framework Ruby on Rails (see http://ar.rubyonrails.org/).
Active Record has a number of advantages over traditional ORM packages. Like the rest of the Rails stack, it emphasizes configuration by convention. This means that Active Record assumes that your tables and fields follow certain conventions unless you explicitly tell it otherwise. For example, it assumes that all tables have an artificial primary key named id (if you have a different primary key, you can override it, of course). It also assumes that the name of each table is a pluralized version of the model (that is, class) name; so if you have a model named Item, it assumes that your database table will be named items.

Active Record lets you define one or more models, each of which represents a single
database table. Class instances are represented by rows in the appropriate database table. The fields of the tables, which will become your object’s attributes, are automatically read from the database, so unlike other ORM libraries, you won’t need to repeat your schema in two places or tinker with XML files to dictate the mapping. However, the relationships between models in Active Record aren’t automatically read from the database, so you’ll need to place code that represents those relationships in your models. Creating a model in Active Record gives you quite a few features for free. You can automatically add, delete, find, and update records using methods, and those methods can make simple data tasks trivial.

Grouping refers to a way to reduce a table into a subset, where each row in the subset
represents the set of records having a particular grouped value or values. For example, if
you were tracking automobile accidents, and you had a table of persons, with their age and number of accidents, you could group by age and retrieve every distinct age in the database. In other words, you would get a list of the age of every person, with the duplicates removed.

If you were using an Active Record model named Person with an age column, you
could find all of the distinct ages of the people involved, as follows:
ages = Person.find(:all, :group => 'age')

However, to perform useful work on grouped queries, you’ll typically use aggregate functions. For example, you’ll need to use aggregate functions to retrieve the average accidents per age group or the count of the people in each age group. You’ve probably encountered a number of aggregate functions already. Some common ones are MAX and MIN, which give you the maximum and minimum value; AVG, which gives you the average value; SUM, which returns the sum of the values; and COUNT, which returns the total number of values. Each database engine may define different statistical functions, but nearly all provide those just mentioned. Continuing with the Active Record model named Person with an age column, you could find the highest age from your table as follows:
oldest_age = Person.calculate(:max, :age)
Note that calculate takes the max function’s name, as a symbol, as its first argument, but Active Record also has a number of convenience functions named after their respective
purposes: count, sum, minimum, maximum, and average. For example, the following two
lines are identical:
average_accident_count = Person.calculate(:avg, :accident_count)
average_accident_count = Person.average(:accident_count)

You have many choices for creating charts with Ruby. For example, you can do simple
charting in straight Hypertext Markup Language (HTML) and Cascading Style Sheets (CSS). Chapter 7 shows you how to use Markaby, a templating language for Ruby, to create your own HTML bar charts. Chapter 11 demonstrates how to use CSS helpers to create charts in Rails. Here, we’ll look at the Gruff and Scruffy graphing libraries, and then use Gruff in a couple of examples. Gruff (http://gruff.rubyforge.org/) provides a simple, Ruby-based interface to enter data and display details. After writing the code, you call a simple command to render the graph to a file. For example, you might have a collection of vintage guitars and want to display a simple bar chart of their values.

Generally, clients love spreadsheets. Often, they don’t have the expertise to manipulate data using SQL or a programming language like Ruby, but they do know how to perform calculations and analyze data using Microsoft Excel or a similar tool. If their data is directly delivered in their format of choice, they can skip a step and save time. (In fact, some less computer-savvy users may not realize that they can copy and paste data from a web page, so exporting to an Excel-compatible format may enable them to act on data in ways they could not before.)

p/s:- The article above is taken from the excerpt from the book Practical Reporting with Ruby and Rails , written by David Berube and publish by Apress year 2008.

h1

Deploying Rails Applications – A Step-by-Step Guide…

September 1, 2016

deploying ruby and rails application

Finally, got this book from the National Library (PNM) last month. It tells us how to deploy a Ruby on Rails application using Mongrel, Rake, and Capistrano. The keys to using Subversion with Rails are maintaining the right structure: you want to keep the right stuff under source control and keep the wrong stuff out. Setting up your application’s repository right the first time will save you time and frustration. A number of items in a Rails application do not belong in source control. Many a new Rails developer has clobbered his team’s database.yml file or checked in a 5MB log file. Both of these problems are with the Subversion setup, not with Rails or even the Rails developer. In an optimal setup, each developer would have their own database.yml file and log files. Each development or production instance of your application will have its own version of these files, so they need to stay out of the code repository. You might already have a Subversion repository, but I’ll assume you don’t and walk you through the entire process from scratch.

Many simple applications simply run off the trunk. Others will feel more comfortable deploying from a stable branch. Several great books address this topic better than I possibly could, but I do want you to get a feel for what’s involved. For detailed information on this topic, you should read Pragmatic Version Control [Mas05].
The changes you do on trunk might not be fully tested, or you could be in the middle of a major refactoring when an urgent bug report comes in. You need to have the ability to deploy a fixed version of the application without having to deploy the full set of changes since the last deployment. In Subversion, you can copy a branch of development to another name, and you can set up Capistrano to deploy from your stable branch instead of your development branch. Developers call this technique stable branch deployment. Let’s create the stable branch, which will be a copy of trunk:
$ svn copy --message "Create the stable branch" \
file:///home/ezra/deployit/trunk \
file:///home/ezra/deployit/branches/stable
Committed revision 234.

When you are ready to merge a set of changes to the stable branch, check the last commit message on the branch to know which revisions you need to merge:
$ svn log --revision HEAD:1 --limit 1 \
file:///home/ezra/deployit/branches/stable
-------------------------------------------------------
r422 | ezra | 2007-05-30 21:30:27 -0500 (30 May 2007) | 1 line
Merged r406:421 from trunk/
-------------------------------------------------------
Using the information in the log message, you can now merge all the changes to the branch:
$ svn merge --revision 422:436 \
file:///home/ezra/deployit/trunk .
A app/models/category.rb
M app/models/forum.rb
A db/migrate/009_create_category.rb

Finally, commit and deploy:
$ svn commit --message "Merged r422:436 from trunk/"
A app/models/category.rb

Transmitting file data ….
Committed revision 437.
$ cap deploy_with_migrations

You now have a good Subversion repository, and you can use it to
deploy. You’ve ignored the files that will break your developers’ will or just your application, and you’ve used common Rails conventions. Still, you should know a few things about developing with Subversion with successful deployment in mind.

You need to install Capistrano only on your development machine, not the server, because Capistrano runs commands on the server with a regular SSH session. If you’ve installed Rails, you probably already have RubyGems on your system. To install Capistrano, issue this command on your local machine:
local$ sudo gem install capistrano
Attempting local installation of 'capistrano'
Local gem file not found: capistrano*.gem
Attempting remote installation of 'capistrano'
Successfully installed capistrano-2.0.0
Successfully installed net-ssh-1.1.1
Successfully installed net-sftp-1.1.0
Installing ri documentation for net-ssh-1.1.1…
Installing ri documentation for net-sftp-1.1.0…
Installing RDoc documentation for net-ssh-1.1.1…
Installing RDoc documentation for net-sftp-1.1.0…
While you are installing gems, install the termios gem as well. (Sorry, termios is not readily available for Windows.) By default, Capistrano echoes your password to the screen when you deploy. Installing the termios gem keeps your password hidden from wandering eyes.

p/s:- This is a good book to read when a user or a programmer wants to deploy Ruby on Rails in their server or production machine. Some of these excerpts are taken from the book Deploying Rails Applications: A Step-by-Step Guide, written by Ezra Zygmuntowicz, Bruce Tate, and Clinton Begin, published by The Pragmatic Bookshelf.

The Ultimate CSS Reference.

Ultimate CSS Reference

Got this book from National Library (PNM) last month. It tells us about the usage, syntax, and implementation of CSS in HTML website design and website programming. In XML documents, including XHTML served as XML, an external style sheet can be referenced using a processing instruction (PI). Such processing instructions are normally part of the XML prologue, coming after the XML declaration, but before the doctype declaration (if there is one) and the root element's start tag. This example shows an XML prologue with a persistent style sheet (base.css), a preferred style sheet (default.css), and an alternative style sheet (custom.css):

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/css" href="/base.css"?>
<?xml-stylesheet type="text/css" href="/default.css"
title="Default"?>
<?xml-stylesheet type="text/css" href="/custom.css"
title="Custom" alternate="yes"?>

An external style sheet can't contain SGML comments (<!-- ... -->) or HTML tags (including <style> and </style>). Nor can it use SGML character references (such as &#169;) or character entity references (such as &copy;). If you need to use special characters in an external style sheet, and they can't be represented through the style sheet's character encoding, specify them with CSS escape notation.
■ The content type of the style element type in HTML is CDATA, which means that character references (numeric character references or character entity references) in an internal style sheet aren’t parsed. If you need to use special characters in an internal style sheet, and they can’t be represented with the document’s character encoding, specify them with CSS escape notation (p. 43).
In XHTML, the content type is #PCDATA, which means that character references are parsed, but only if the document’s served as XML.
■ Unlike in style elements, character references are parsed in style attributes, even in HTML.
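Since escape notation comes up in all three cases above, here is a brief illustrative sketch of what it looks like in a style sheet (this example is not from the book, and the selector name is hypothetical):

```css
/* \a9 is the hexadecimal escape for the copyright sign (U+00A9).
   The space after the escape terminates it, so the characters
   that follow are not read as part of the hex code. */
.footer:before {
  content: "\a9 2008 ";
}
```

The escape works the same way in external and internal style sheets, because it is resolved by the CSS parser rather than by the HTML or XML parser.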

Some of the early browser implementations of CSS were fraught with problems: they only supported parts of the specification, and in some cases the implementation of certain CSS features didn't comply with the specification. Today's browsers generally provide excellent support for the latest CSS specification, even incorporating features that aren't yet in the official specification but will likely appear in the next version. Due to the implementation deficiencies in early browsers, many old style sheets were written to work with the then-contemporary browsers rather than to comply with the specification.

So how does doctype sniffing work? Which declarations trigger standards mode, quirks mode, and almost standards mode? The document type definition reference, for HTML and XHTML, consists of the string PUBLIC followed by a formal public identifier (FPI), optionally followed by a formal system identifier (FSI), which is the URL for the DTD.
Here’s an example of a doctype declaration that contains both an FPI and an FSI:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
This example contains only the FPI:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
Doctype sniffing works by detecting which of these parts are present in the doctype declaration. If an FPI is present, but an FSI isn’t, browsers generally choose quirks mode, since this was the common way of writing doctype declarations in the old days. Browsers also choose quirks mode if the doctype declaration is missing altogether—which used to be very common—or is malformed.

The @import at-rule is a mechanism for importing one style sheet into another. It should be followed by a URI value and a semicolon, but it’s possible to use a string value instead of the URI value. Where relative URIs are used, they’re interpreted as being relative to the importing style sheet. You can also specify one or more media types to which the imported style sheet applies—simply append a comma-separated list of media types to the URI.

Here’s an example of a media type specification:

@import url(/css/screen.css) screen, projection;

The @import rules in a style sheet must precede all rule sets. An @import rule that follows one or more rule sets will be ignored. As such, the example below shows an incorrect usage; because it appears after a rule set, the following @import rule will be ignored:

html {
  background-color: #fff;
  color: #000;
}
/* The following rule will be ignored */
@import url(“other.css”);

p/s:- There is a lot of CSS syntax that we can learn from this book. It tells us about the parts of the CSS language that we can implement in HTML website coding. Some of the articles above are excerpts from the book The Ultimate CSS Reference, written by Tommy Olsson and Paul O'Brien, published by SitePoint Pty Ltd.

Beginning C# 2008 Databases: From Novice to Professional…

July 30, 2016

beginning c# 2008 databases

Just got this book from PNM (National Library) this month. The book tells us about programming databases in C# with SQL Server. It focuses on accessing databases using C# 2008 as a development tool, in conjunction with the new release of Visual Studio 2008 and .NET Framework 3.5. The SQL Server version it uses is SQL Server 2005.

SQL Server 2005 is one of the most advanced relational database management systems (RDBMSs) available. An exciting feature of SQL Server 2005 is the integration of the .NET CLR into the SQL Server 2005 database engine, making it possible to implement database objects using managed code written in a .NET language such as Visual C# .NET or Visual Basic .NET. Besides this, SQL Server 2005 comes with multiple services such as analysis services, data transformation services, reporting services, notification services, and Service Broker. SQL Server 2005 offers one common environment, named SQL Server Management Studio, for both database developers and database administrators (DBAs).

Query by Example (QBE) is an alternative, graphical, point-and-click way of querying a database. It differs from SQL in that it has a graphical user interface that allows users to write queries by creating example tables on the screen. QBE is especially suited for queries that are not too complex and can be expressed in terms of a few tables. Each database vendor offers its own implementation of SQL that conforms at some level to the standard but typically extends it. T-SQL does just that, and some of the SQL used in this book may not work if you try it with a database server other than SQL Server. Common table expressions (CTEs) are new to SQL Server 2005. A CTE is a named temporary result set that is used by the FROM clause of a SELECT query. You then use the result set in any SELECT, INSERT, UPDATE, or DELETE query defined within the same scope as the CTE.
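The book's examples are in T-SQL, but the CTE idea can be sketched with any engine that supports the WITH clause. Here is a minimal, hypothetical illustration (not from the book; the table and column names are invented) using Python's built-in sqlite3 module:

```python
import sqlite3

# In-memory database with a small, made-up orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 50), ("alice", 70), ("bob", 20)],
)

# The CTE named totals is a temporary result set used by the FROM
# clause of the SELECT that follows it, within the same statement.
rows = conn.execute(
    """
    WITH totals AS (
        SELECT customer, SUM(amount) AS total
        FROM orders
        GROUP BY customer
    )
    SELECT customer FROM totals WHERE total > 60
    """
).fetchall()

print(rows)  # [('alice',)]
```

The CTE exists only for the duration of the statement; unlike a view, nothing persists in the database afterward.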

Most queries require information from more than one table. A join is a relational operation that produces a table by retrieving data from two (not necessarily distinct) tables and matching their rows according to a join specification. Different types of joins exist, which you'll look at individually, but keep in mind that every join is a binary operation; that is, one table is joined to another, which may be the same table, since tables can be joined to themselves. The join operation is a rich and somewhat complex topic.
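To make the join operation concrete, here is a small, hypothetical sketch (again using Python's sqlite3 rather than the book's T-SQL; the table names are invented) that matches rows from two tables according to a join specification:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE orders (customer_id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "alice"), (2, "bob")])
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 50), (1, 70)])

# INNER JOIN: keep only customer/order row pairs whose ids match.
inner = conn.execute(
    "SELECT c.name, o.amount FROM customers c "
    "JOIN orders o ON o.customer_id = c.id "
    "ORDER BY c.name, o.amount"
).fetchall()

# LEFT JOIN: keep every customer; unmatched customers get NULL amounts.
left = conn.execute(
    "SELECT c.name, o.amount FROM customers c "
    "LEFT JOIN orders o ON o.customer_id = c.id "
    "ORDER BY c.name, o.amount"
).fetchall()

print(inner)  # [('alice', 50), ('alice', 70)]
print(left)   # [('alice', 50), ('alice', 70), ('bob', None)]
```

The difference between the two results shows the join specification at work: bob has no matching order rows, so he disappears from the inner join but survives the left join with a NULL amount.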

Stored procedures can have parameters that can be used for input or output and single-integer return values (that default to zero), and they can return zero or more result sets. They can be called from client programs or other stored procedures. Because stored procedures are so powerful, they are becoming the preferred mode for much database programming, particularly for multitier applications and web services, since (among their many benefits) they can dramatically reduce network traffic between clients and database servers.
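As a rough sketch of the shape of such a procedure (a hypothetical example, not taken from the book; all object names are invented), a T-SQL stored procedure with one input parameter, one result set, and an integer return value might look like this:

```sql
-- Hypothetical stored procedure: one input parameter,
-- one result set, and a single-integer return value.
CREATE PROCEDURE GetOrdersForCustomer
    @CustomerId INT
AS
BEGIN
    SELECT OrderId, Amount
    FROM Orders
    WHERE CustomerId = @CustomerId;

    RETURN 0;  -- the return value defaults to zero anyway
END
```

A client program or another stored procedure would then invoke it with EXEC GetOrdersForCustomer @CustomerId = 1; and read back the result set.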

p/s:- Quite an interesting book to read…a good book for people or students who want to learn C# database programming…Some of the articles are excerpts from the book Beginning C# 2008 Databases: From Novice to Professional, written by Vidya Vrat Agarwal and James Huddleston, published by Apress.

 

macs_all-in-one

Macs – All-In-One for Dummies.

Borrowed this book from National Library (PNM) this month. This book tells us about the history and architecture of Mac computers and notebooks, and about the Mac operating system.

The type of processor in your Mac can determine the applications (also known as apps or software) your Mac can run. Before buying any software, make sure that it can run on your computer. To identify the type of processor used in your Mac, click the Apple menu in the upper-left corner of the screen and choose About This Mac. An About This Mac window appears, listing your processor as Intel Core 2 Duo, Core i3, Core i5, Core i7, or Xeon. If your Mac doesn't have one of the previously mentioned processors, you won't be able to run OS X Mavericks, version 10.9. This means that Core Solo and Core Duo models can't run Mavericks. What's more, to use Mavericks, you also need at least 2GB of RAM (random access memory).

The Dock is a rectangular strip that contains app, file, and folder icons. It lies in wait just out of sight, either at the bottom or on the left or right side of the Desktop. When you hover the pointer in the area where the Dock is hiding, it appears, displaying the app, file, and folder icons stored there. When you use your Mac for the first time, the Dock already has icons for many of the pre-installed apps, as well as the Downloads folder and a Trash icon. You click an icon to elicit an action, which is usually to open an app or file, although you can also remove the icon from the Dock or activate a setting so that the app opens when you log in to your Mac.

The Finder is an app that lets you find, copy, move, rename, delete, and open files and folders on your Mac. You can run apps directly from the Finder, although the Dock makes finding and running apps you use frequently much more convenient. The Finder runs all the time. To switch to the Finder, click the Finder icon on the Dock (the Picasso-like faces icon on the far left, or top, of the Dock) or just click an area of the Desktop outside any open windows. You know you're in the Finder because the app menu is Finder, as opposed to Pages, System Preferences, or some other app name.

iCloud remotely stores and syncs data that you access from various devices: your Mac and other Apple devices, such as iPhones, iPads, and iPods, as well as PCs running Windows. Sign in to the same iCloud account on different devices, and the data for activated apps syncs; that is, you find the same data on all your devices, and when you make a change on one device, it shows up on the others. The initial setup on your Mac, or the creation of an iCloud Apple ID as explained previously, activates your iCloud account and places a copy of the data from Mail, Contacts, Calendar, Notes, Reminders, and Safari from your Mac to the cloud (that is, the Apple data storage equipment). Here, we show you how to work with the iCloud preferences, sync devices, and sign in to and use the iCloud website.

p/s:- This book begins by focusing on the basics for all aspects of using a Mac with the latest operating system, OS X 10.9 Mavericks. A good book to have…Some of the articles are excerpts taken from the book Macs All-in-One For Dummies, written by Joe Hutsko and Barbara Boyd, published by John Wiley & Sons Inc, 4th Edition.