
Describe Pipelining: What is pipelining and how it works in a computer; pipeline performance

Question:

Describe Pipelining: What is pipelining and how it works in a computer.

Solution:

What is pipelining:

In computer architecture, pipelining is a technique in which different units of the processor work on different instructions at the same time: while one instruction is being fetched, earlier instructions are being decoded, executed, and having their results stored. This overlapping of the instruction cycle is called pipelining.


Pipelining Stages:

A basic pipeline has two stages. They are:
  1. Fetch instruction and
  2. Execute instruction
The first stage fetches an instruction and buffers it. When the second stage is free, the first stage passes it the buffered instruction. While the second stage is executing the instruction, the first stage takes advantage of any unused memory cycles to fetch and buffer the next instruction.


How to speed up pipelining:

To gain further speedup, the pipeline must have more stages.

Let us consider the following decomposition of the instruction processing.
• Fetch instruction (FI): Read the next expected instruction into a buffer.
• Decode instruction (DI): Determine the opcode and the operand specifiers.
• Calculate operands (CO): Calculate the effective address of each source operand. This may involve displacement, register indirect, indirect, or other forms of address calculation.
• Fetch operands (FO): Fetch each operand from memory. Operands in registers need not be fetched.
• Execute instruction (EI): Perform the indicated operation and store the result, if any, in the specified destination operand location.
• Write operand (WO): Store the result in memory.


Here, the FI stage fetches an instruction and passes it on to the next stage, DI (Decode Instruction). FI is then free, so it can fetch the next instruction while the previous one is still being processed. In the same way, DI, CO, FO, EI and WO can each start new work as soon as they become free.

In this way, by the time the last instruction is fetched, the result of the first instruction has already been stored.

So, we can say that when different functional units perform independent work in a synchronous, simultaneous way, it is called pipelining.

See an example of non-pipelined versus pipelined execution:


 
In the non-pipelined case the example needs 8 cycles in total, but in the pipelined case it needs only 5 cycles, and that is the real benefit of pipelining.
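The 8-versus-5 cycle count above can be checked with a minimal Python sketch (not from the original post), assuming one cycle per stage and no stalls:

```python
def total_cycles(n_instructions, k_stages, pipelined):
    """Total cycles, assuming one cycle per stage and no stalls or branches."""
    if pipelined:
        # The first instruction takes k cycles; each later one finishes
        # one cycle after its predecessor.
        return k_stages + (n_instructions - 1)
    # Without pipelining, every instruction runs through all k stages alone.
    return k_stages * n_instructions

# Two-stage pipeline, four instructions (the example above):
print(total_cycles(4, 2, pipelined=False))  # 8
print(total_cycles(4, 2, pipelined=True))   # 5
```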


How pipelining works in a processor to speed up execution:

See below the six stages of the pipeline:

Six-Stage CPU Instruction Pipeline


What is the performance of a pipeline, and how is it determined?

The cycle time t of an instruction pipeline is the time needed to advance a set of instructions one stage through the pipeline. It can be determined as

t = max[ti] + d = tm + d,   1 ≤ i ≤ k


Where,
ti = time delay of the circuitry in the ith stage of the pipeline
tm = maximum stage delay (delay through stage which experiences the largest delay)
k = number of stages in the instruction pipeline
d = time delay of a latch, needed to advance signals and data from one stage to the next
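Given those definitions, the cycle time is just the largest stage delay plus the latch delay. A small Python sketch with made-up delay values (not from the text):

```python
# Hypothetical stage delays ti in nanoseconds for a k = 4 stage pipeline.
stage_delays = [2.0, 1.5, 3.0, 2.5]
latch_delay = 0.2  # d, the latch delay between stages

# t = max over i of ti, plus d (i.e. tm + d)
tau = max(stage_delays) + latch_delay
print(tau)  # 3.2 ns: the slowest stage sets the pace
```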


Now, suppose that n instructions are processed, with no branches. Let Tk,n be the total time required for a pipeline with k stages to execute n instructions. Then,

Tk,n = [k + (n - 1)]t

A total of k cycles are required to complete the execution of the first instruction, and each of the remaining n - 1 instructions requires one further cycle. For example, with k = 6 stages and n = 9 instructions, 14 = [6 + (9 - 1)] cycles are required in total.

Now consider a processor with equivalent functions but no pipeline, and assume that its instruction cycle time is kt. The speedup factor for the instruction pipeline compared to execution without the pipeline is defined as

Sk = T1,n / Tk,n = nkt / [k + (n - 1)]t = nk / [k + (n - 1)]


Where Sk = Speed up factor of pipelining.
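Both formulas can be sketched in Python; plugging in the k = 6, n = 9 example from the text reproduces the 14-cycle figure:

```python
def pipeline_time(k, n, tau=1):
    """T_k,n = [k + (n - 1)] * t, here in cycle-time units (t = 1)."""
    return (k + (n - 1)) * tau

def speedup(k, n):
    """S_k = n*k / [k + (n - 1)], pipelined versus non-pipelined."""
    return (n * k) / (k + (n - 1))

print(pipeline_time(6, 9))      # 14 cycles for 9 instructions, 6 stages
print(round(speedup(6, 9), 2))  # 3.86, approaching k = 6 as n grows
```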





Tags: What is pipelining, Computer Organization and Architecture, Describe Pipelining, What is pipelining and its working in Computer, pipeline performance, Pipe-Lining performance, What is Pipe-lining, What is Pipe-Line, About Pipe Line in Computer architecture



Explain Direct-Mapping Cache Organization with diagram-Solution

Question:

Explain Direct-Mapping Cache Organization with diagram.

Solution:

First, what are mapping functions and why are they needed?
Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. Further, a means is needed for determining which main memory block currently occupies a cache line. The choice of the mapping function dictates how the cache is organized. Three techniques can be used: direct, associative, and set associative.


DIRECT MAPPING 

The simplest technique, known as direct mapping, maps each block of main memory into only one possible cache line. The mapping is expressed as,

i = j modulo m

Where,
i  = cache line number
j  = main memory block number
m = number of lines in the cache

Figure 1 shows the mapping for the first m blocks of main memory.

Figure-1-blocks of main memory

Each block of main memory maps into one unique line of the cache. The next m blocks of main memory map into the cache in the same fashion; that is, block Bm of main memory maps into line L0 of the cache, block Bm+1 maps into line L1, and so on.
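The mapping i = j modulo m is one line of Python; the cache size m = 4 below is a hypothetical value chosen for illustration:

```python
m = 4  # hypothetical number of cache lines

def cache_line(j):
    """Main memory block j maps to cache line i = j mod m."""
    return j % m

# Blocks 0..3 fill lines 0..3; block 4 wraps back to line 0, and so on.
print([cache_line(j) for j in range(10)])  # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```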

The mapping function is easily implemented using the main memory address.
Figure 2 illustrates the general mechanism.




For purposes of cache access, each main memory address can be viewed as consisting of three fields. The least significant w bits identify a unique word or byte within a block of main memory; in most contemporary machines, the address is at the byte level. The remaining s bits specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of (s - r) bits (the most significant portion) and a line field of r bits. This latter field identifies one of the m = 2^r lines of the cache. To summarize,

• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in cache = m = 2^r
• Size of cache = 2^(r+w) words or bytes
• Size of tag = (s - r) bits
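The field split can be sketched with bit masking; the widths below (s = 22, w = 2, r = 14) are illustrative assumptions, not values from the post:

```python
s, w, r = 22, 2, 14  # assumed widths: s block bits, w word bits, r line bits

def split_address(addr):
    """Split an (s + w)-bit address into (tag, line, word) fields."""
    word = addr & ((1 << w) - 1)         # least significant w bits
    line = (addr >> w) & ((1 << r) - 1)  # next r bits select the cache line
    tag = addr >> (w + r)                # most significant s - r bits
    return tag, line, word

# Build an address with tag = 5, line = 7, word = 1 and split it back:
addr = (5 << (w + r)) | (7 << w) | 1
print(split_address(addr))  # (5, 7, 1)
```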




Tags: Computer Organization and Architecture,  Direct-Mapping Cache Organization, Direct mapping example, Cache Organization with diagram-Solution, mapping functions



What is the general relationship among access time, memory cost, and capacity?-Solution

Question is:

What is the general relationship among access time, memory cost, and capacity?

Solution is:

There is a trade-off among access time, cost per bit, and storage capacity. The following relationships hold:

Faster access time, greater cost per bit;
Greater capacity, smaller cost per bit;
Greater capacity, slower access time.

That means:
1) If a memory can be accessed very fast, its cost per bit is greater, so only a smaller capacity is practical.
2) If a memory is slower to access, its cost per bit is smaller, so a much greater capacity becomes practical.



Tags: Computer Organization and Architecture, general relationship among access time, memory cost, and capacity, general relationship among access time, memory cost, and capacity solution,



What are the differences among sequential access, direct access, random access and associative access?

Question is:

What are the differences among sequential access, direct access, random access and associative access?

Solution is:

There are four methods of accessing units of data: sequential access, direct access, random access, and associative access. The differences among them are given below:

First, see each access method in detail, then come to the differences.

Sequential access: 
    Memory is organized into units of data, called records. Access must be made in a specific linear sequence. Stored addressing information is used to separate records and assist in the retrieval process. A shared read–write mechanism is used, and this must be moved from its current location to the desired location, passing and rejecting each intermediate record. Thus, the time to access an arbitrary record is highly variable. Tape units are sequential access.

Direct access: 
    As with sequential access, direct access involves a shared read–write mechanism. However, individual blocks or records have a unique address based on physical location. Access is accomplished by direct access to reach a general vicinity plus sequential searching, counting, or waiting to reach the final location. Again, access time is variable. Disk units are direct access.

Random access: 
    Each addressable location in memory has a unique, physically wired-in addressing mechanism. The time to access a given location is independent of the sequence of prior accesses and is constant. Thus, any location can be selected at random and directly addressed and accessed. Main memory and some cache systems are random access.

Associative access: 
    This is a random access type of memory that enables one to make a comparison of desired bit locations within a word for a specified match, and to do this for all words simultaneously. Thus, a word is retrieved based on a portion of its contents rather than its address. As with ordinary random-access memory, each location has its own addressing mechanism, and retrieval time is constant independent of location or prior access patterns. Cache memories may employ associative access.


So, the key differences among sequential access, direct access, and random access are:

Sequential access: Memory is organized into units of data, called records. Access must be made in a specific linear sequence.
Direct access: Individual blocks or records have a unique address based on physical location. Access is accomplished by direct access to reach a general vicinity plus sequential searching, counting, or waiting to reach the final location.
Random access: Each addressable location in memory has a unique, physically wired-in addressing mechanism. The time to access a given location is independent of the sequence of prior accesses and is constant.


Tags: Computer Organization and architecture, differences among sequential access, direct access, random access and associative access, access of data of memory units, data accessing in computer memory



What are the Key Characteristics of Computer Memory Systems?

Question is:

What are the Key Characteristics of Computer Memory Systems?

Solution is:

The most important key characteristics of computer memory systems are (see the table below for the full picture):

Location
         Internal (e.g. processor registers, main memory, cache)
         External (e.g. optical disks, magnetic disks, tapes)
Capacity
        Number of words
        Number of bytes
Unit of Transfer
        Word
        Block
Access Method
        Sequential
        Direct
        Random
       Associative
Performance
      Access time
      Cycle time
      Transfer rate
Physical Type
     Semiconductor
     Magnetic
     Optical
     Magneto-optical
Physical Characteristics
      Volatile/nonvolatile
      Erasable/nonerasable
Organization
     Memory modules

Now we'll illustrate each of the characteristics of Computer memory system one by one-

The term location refers to whether memory is internal or external to the computer. 
  Internal memory is often equated with main memory, but there are other forms of internal memory. The processor requires its own local memory, in the form of registers. Further, as we shall see, the control unit portion of the processor may also require its own internal memory. Cache is another form of internal memory.
External memory consists of peripheral storage devices, such as disk and tape, that are accessible to the processor via I/O controllers.

An obvious characteristic of memory is its capacity. 
  For internal memory, this is typically expressed in terms of bytes (1 byte = 8 bits) or words. Common word lengths are 8, 16, and 32 bits.
  External memory capacity is typically expressed in terms of bytes.

A related concept is the unit of transfer.
  For internal memory, the unit of transfer is equal to the number of electrical lines into and out of the memory module. This may be equal to the word length, but is often larger, such as 64, 128, or 256 bits. To clarify this point, consider three related concepts for internal memory:
• Word: The “natural” unit of organization of memory. The size of the word is typically equal to the number of bits used to represent an integer and to the instruction length. Unfortunately, there are many exceptions. For example, the CRAY C90 (an older model CRAY supercomputer) has a 64-bit word length but uses a 46-bit integer representation. The Intel x86 architecture has a wide variety of instruction lengths, expressed as multiples of bytes, and a word size of 32 bits.
• Addressable units: In some systems, the addressable unit is the word. However, many systems allow addressing at the byte level. In any case, the relationship between the length in bits A of an address and the number N of addressable units is 2^A = N.
• Unit of transfer: For main memory, this is the number of bits read out of or written into memory at a time. The unit of transfer need not equal a word or an addressable unit. For external memory, data are often transferred in much larger units than a word, and these are referred to as blocks.


Another distinction among memory types is the method of accessing units of data.These include the following:
• Sequential access: Memory is organized into units of data, called records. Access must be made in a specific linear sequence. Stored addressing information is used to separate records and assist in the retrieval process. A shared read–write mechanism is used, and this must be moved from its current location to the desired location, passing and rejecting each intermediate record. Thus, the time to access an arbitrary record is highly variable. Tape units are sequential access.
• Direct access: As with sequential access, direct access involves a shared read–write mechanism. However, individual blocks or records have a unique address based on physical location. Access is accomplished by direct access to reach a general vicinity plus sequential searching, counting, or waiting to reach the final location. Again, access time is variable. Disk units are direct access.
• Random access: Each addressable location in memory has a unique, physically wired-in addressing mechanism. The time to access a given location is independent of the sequence of prior accesses and is constant. Thus, any location can be selected at random and directly addressed and accessed. Main memory and some cache systems are random access.
• Associative: This is a random access type of memory that enables one to make a comparison of desired bit locations within a word for a specified match, and to do this for all words simultaneously. Thus, a word is retrieved based on a portion of its contents rather than its address. As with ordinary random-access memory, each location has its own addressing mechanism, and retrieval time is constant, independent of location or prior access patterns. Cache memories may employ associative access.



From a user’s point of view, the two most important characteristics of memory are capacity and performance. Three performance parameters are used:

• Access time (latency): For random-access memory, this is the time it takes to perform a read or write operation, that is, the time from the instant that an address is presented to the memory to the instant that data have been stored or made available for use. For non-random-access memory, access time is the time it takes to position the read–write mechanism at the desired location.

• Memory cycle time: This concept is primarily applied to random-access memory and consists of the access time plus any additional time required before a second access can commence. This additional time may be required for transients to die out on signal lines or to regenerate data if they are read destructively. Note that memory cycle time is concerned with the system bus, not the processor.

• Transfer rate: This is the rate at which data can be transferred into or out of a memory unit. For random-access memory, it is equal to 1/(cycle time). For non-random-access memory, the following relationship holds:
Tn = TA + n/R
where,
Tn  = Average time to read or write n bits
TA  = Average access time
n  = Number of bits
R  = Transfer rate, in bits per second (bps)
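The relationship can be sketched numerically; the access time and transfer rate below are made-up illustration values:

```python
def transfer_time(t_access, n_bits, rate_bps):
    """T_n = T_A + n / R: average time to read or write n bits."""
    return t_access + n_bits / rate_bps

# e.g. 10 ms average access time, 4096 bits at 1,000,000 bps:
print(transfer_time(0.010, 4096, 1_000_000))  # about 0.014096 seconds
```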

A variety of physical types of memory have been employed. The most common today are semiconductor memory, magnetic surface memory, used for disk and tape, and optical and magneto-optical.

Several physical characteristics of data storage are important.
  In a volatile memory, information decays naturally or is lost when electrical power is switched off.
  In a nonvolatile memory, information once recorded remains without deterioration until deliberately changed; no electrical power is needed to retain information.
  Magnetic-surface memories are nonvolatile.
  Semiconductor memory may be either volatile or nonvolatile. Nonerasable memory cannot be altered, except by destroying the storage unit.
  Semiconductor memory of this type is known as read-only memory (ROM). Of necessity, a practical nonerasable memory must also be nonvolatile.

For random-access memory, the organization is a key design issue. By organization is meant the physical arrangement of bits to form words.





Tags: Computer Architectures solution, What is the Key Characteristics of Computer Memory Systems, Computer memory characteristics



Computer Organization and Architecture Designing For Performance | 8th Edition PDF By William Stallings With Solution

Problem is:

Download the PDF of Computer Organization and Architecture: Designing for Performance, 8th Edition, by William Stallings, along with the solutions PDF.


Solution is:

Computer Organization and Architecture Designing For Performance | 8th Edition By William Stallings

Book Name : Computer Organization and Architecture Designing For Performance
Edition : 8th Edition
Book Author Name : William Stallings
Book Download Size : 3.2 MB
Book Total Page : 881 pages
Book Front page :
Author description: William Stallings
Book Download Link : Download From Main Link
Please click the SKIP AD button if an ad appears.
Book Mirror Download Link : Download From Mirror Link



Computer Organization and Architecture Designing For Performance | 8th Edition By William Stallings Solution

Book Name : Computer Organization and Architecture Solution PDF
Edition : 8th Edition Solution pdf
Solution Book Author Name : William Stallings
Book Download Size : 3.8 MB
Book Total Page : 134 Pages solutions
Solution Book Front page :
Book Download Link : Download From Main Link
Please click the SKIP AD button if an ad appears.
Book Mirror Download Link : Download From Mirror Link



Tags: Download Computer Organization and Architecture Ebook, Computer Organization and architecture solution download, computer organization and architecture pdf, computer organization and architecture by William Stallings pdf book, Computer Organization ebook,pdf

