Computer Processing:

Most of the earliest computer memories were based on physical elements that can exist in just one of two states (on or off); such an element corresponds to one bit of information. Binary symbols are used because electronic devices can store and process them rapidly and cheaply. The physical devices used have changed greatly, but the principles of information representation and manipulation within digital computers have remained essentially the same. The size of a computer's memory can be described by the number of bits (binary digits) it contains, but for most purposes larger units of storage are used to characterise machines. These are the byte, which normally consists of 8 bits and is sufficient to represent one character, and the word, which may be 8, 16, or 32 bits long and is the smallest unit of storage to which most of the computer's instructions can be applied. Memories are usually described as being of a size measured in kilobytes (K bytes) or megabytes (M bytes). The sets or patterns of bits which make up bytes and words are used to represent all information within a computer, whether program instructions or data items.
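As an illustration, the following short C program (a minimal sketch; the fixed-width types from stdint.h are chosen here purely for demonstration, not mandated by any particular machine) prints the bit counts of common byte- and word-sized units and then the 8-bit on/off pattern of a single stored character:

    #include <stdio.h>
    #include <limits.h>   /* CHAR_BIT: the number of bits per byte */
    #include <stdint.h>   /* fixed-width integer types */

    int main(void) {
        /* A byte is normally 8 bits; words are commonly 8, 16, or 32 bits. */
        printf("bits per byte: %d\n", CHAR_BIT);
        printf("uint8_t : %zu bits\n", sizeof(uint8_t)  * CHAR_BIT);
        printf("uint16_t: %zu bits\n", sizeof(uint16_t) * CHAR_BIT);
        printf("uint32_t: %zu bits\n", sizeof(uint32_t) * CHAR_BIT);

        /* Print the bit pattern of one byte: each bit is on (1) or off (0). */
        uint8_t b = 'A';                  /* the character 'A' stored as 8 bits */
        for (int i = 7; i >= 0; i--)
            putchar(((b >> i) & 1) ? '1' : '0');
        putchar('\n');                    /* prints 01000001 on an ASCII machine */
        return 0;
    }

The same pattern of bits could equally well be interpreted as the integer 65 or as the character 'A': it is the program's instructions, not the memory itself, that give a bit pattern its meaning.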

The exact method of representation varies from one machine to another, particularly with respect to instruction formats. An 8-bit byte can accommodate 256 different bit patterns, which is sufficient for most of the characters that might need to be printed: upper- and lower-case letters A-Z, the digits 0-9, and a range of punctuation symbols, with allowance for non-printing characters such as end-of-line. The set of bit patterns corresponding to a set of characters is called a character code; standard codes include ASCII and EBCDIC. Numbers are normally represented by one or more computer words. In the case of integers, the set of 8, 16, or 32 bits (according to the computer's word length) is treated as a binary integer, i.e., encoded in the notation of binary arithmetic. A real number is represented by dividing a computer word into two components: a fractional value (the mantissa) together with an exponent, which gives the power of the base by which the mantissa must be multiplied.
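A small C sketch can make these representations concrete. It prints the ASCII codes of a few characters, shows an integer read as binary arithmetic, and uses the standard library function frexp to split a real number into its mantissa and exponent (the values chosen, 13 and 6.5, are arbitrary examples):

    #include <stdio.h>
    #include <math.h>    /* frexp: split a real into mantissa and exponent */

    int main(void) {
        /* Character codes: each character maps to one 8-bit pattern. */
        for (char c = 'A'; c <= 'E'; c++)
            printf("'%c' -> %d\n", c, c); /* ASCII: 'A' is 65, 'B' is 66, ... */

        /* Integers: the bit pattern is read directly as a binary number. */
        int n = 13;                       /* 13 = 8 + 4 + 1, i.e. binary 1101 */
        printf("%d is binary 1101\n", n);

        /* Reals: a mantissa times a power of the base (2 on binary machines). */
        double x = 6.5;
        int exponent;
        double mantissa = frexp(x, &exponent);
        printf("%g = %g * 2^%d\n", x, mantissa, exponent); /* 6.5 = 0.8125 * 2^3 */
        return 0;
    }

Note that frexp returns a mantissa in the range [0.5, 1); hardware floating-point formats normalise the mantissa in a similar way so that the exponent alone carries the scale of the number.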
