Thursday, October 6, 2011

force

Force is a quantitative description of the interaction between two physical bodies, such as an object and its environment. For a body of constant mass, force is proportional to acceleration. In calculus terms, force is the derivative of momentum with respect to time.
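In symbols, writing p = mv for momentum, the calculus statement reduces to the familiar proportionality whenever the mass m does not change:

\[ F = \frac{dp}{dt} = \frac{d(mv)}{dt} = m\frac{dv}{dt} = ma \]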

Contact force is defined as the force exerted when two physical objects come in direct contact with each other. Other forces, such as gravitation and electromagnetic forces, can exert themselves even across the empty vacuum of space.

The concept of force was originally described by Sir Isaac Newton in his three laws of motion. He explained gravity as an attractive force between bodies that possess mass (in Einstein's general relativity, gravity is not described as a force).

In physics, force is what changes or tends to change a state of rest or motion in an object. Force causes objects to accelerate, adds to the overall pressure on an object, changes their direction of motion, or changes their shape. Force is measured in newtons (N).

According to Newton's Second Law of Motion, the formula for finding force is:

F = ma

where F is the force,
m is the mass of an object,
and a is the acceleration of the object.
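For example, accelerating a 10 kg mass at 2 m/s² requires

\[ F = ma = 10\ \mathrm{kg} \times 2\ \mathrm{m/s^2} = 20\ \mathrm{N}. \]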

If a is set to the standard acceleration due to gravity g, another formula follows:

W = mg

where W is the weight of an object,
m is the mass of an object,
and g is the acceleration due to gravity at sea level, about 9.8 m/s².
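For example, the weight of a 70 kg person at sea level is roughly

\[ W = mg = 70\ \mathrm{kg} \times 9.8\ \mathrm{m/s^2} \approx 686\ \mathrm{N}. \]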

Force is a vector, so it has both a magnitude and a direction.

Another equation that is useful is:

F = G m1 m2 / d²

F is force; G is the gravitational constant, the constant of proportionality in Newton's law of universal gravitation; m1 is the mass of one object; m2 is the mass of the second object; and d is the distance between the objects (d² is its square).
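As a rough check, using the commonly quoted values G ≈ 6.674 × 10⁻¹¹ N·m²/kg², an Earth mass of about 5.97 × 10²⁴ kg, an Earth radius of about 6.37 × 10⁶ m, and a 1 kg test mass, the formula reproduces the familiar weight of one kilogram at the surface:

\[ F = \frac{G m_1 m_2}{d^2} \approx \frac{(6.674 \times 10^{-11})(5.97 \times 10^{24})(1)}{(6.37 \times 10^{6})^2} \approx 9.8\ \mathrm{N}. \]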

A force is always a push, pull, or a twist, and it affects objects by pushing them up, pulling them down, pushing them to a side, or by changing their motion or shape in some other way.



motion

In physics, motion is a change in position of an object with respect to time. Change in motion is the result of an unbalanced force. Motion is typically described in terms of velocity, acceleration, displacement and time.[1] An object's velocity cannot change unless it is acted upon by a force, as described by Newton's first law. An object's momentum is directly related to the object's mass and velocity, and the total momentum of all objects in a closed system (one not affected by external forces) does not change with time, as described by the law of conservation of momentum.

A body which does not move is said to be at rest, motionless, immobile, stationary, or to have constant (time-invariant) position.

Motion is always observed and measured relative to a frame of reference. As there is no absolute frame of reference, absolute motion cannot be determined; this is emphasised by the term relative motion.[2] A body which is motionless relative to a given reference frame is still moving relative to infinitely many other frames. Thus, everything in the universe is moving.[3]

More generally, the term motion signifies any temporal change in a physical system. For example, one can talk about the motion of a wave or of a quantum particle (or any other field), where the concept of a definite location does not apply.

Laws of Motion

In physics, motion in the universe is described through two sets of apparently contradictory laws of mechanics. The motions of all large-scale and familiar objects in the universe (such as projectiles, planets, cells, and humans) are described by classical mechanics, whereas the motion of very small atomic and sub-atomic objects is described by quantum mechanics.

Classical mechanics

Classical mechanics is used for describing the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, stars, and galaxies. It produces very accurate results within these domains, and is one of the oldest and largest subjects in science, engineering and technology.

Classical mechanics is fundamentally based on Newton's Laws of Motion. These laws describe the relationship between the forces acting on a body and the motion of that body. They were first compiled by Sir Isaac Newton in his work Philosophiæ Naturalis Principia Mathematica, first published on July 5, 1687. His three laws are:

  1. In the absence of a net external force, a body either is at rest or moves with constant velocity.
  2. The net external force on a body is equal to the mass of that body times its acceleration; F = ma. Alternatively, force is proportional to the time derivative of momentum.
  3. Whenever a first body exerts a force F on a second body, the second body exerts a force −F on the first body. F and −F are equal in magnitude and opposite in direction.[4]

Newton's three laws of motion, along with his law of universal gravitation, explain Kepler's laws of planetary motion, which were the first to provide an accurate mathematical model for understanding orbiting bodies in outer space. This explanation unified the motion of celestial bodies and the motion of objects on Earth.

Classical mechanics was later further enhanced by Albert Einstein's special relativity and general relativity. Special relativity explains the motion of objects with a high velocity, approaching the speed of light; general relativity is employed to handle gravitational motion at a deeper level.

Quantum mechanics

Quantum mechanics is a set of principles describing physical reality at the atomic level of matter (molecules and atoms) and the subatomic level (electrons, protons, and even smaller particles). These descriptions include the simultaneous wave-like and particle-like behavior of both matter and radiation energy, as described by the wave–particle duality.

In contrast to classical mechanics, where accurate measurements and predictions can be calculated about location and velocity, in the quantum mechanics of a subatomic particle, one can never specify its state, such as its simultaneous location and velocity, with complete certainty (this is called the Heisenberg uncertainty principle).

In addition to describing the motion of atomic-level phenomena, quantum mechanics is useful in understanding some large-scale phenomena such as superfluidity, superconductivity, and biological systems, including the function of smell receptors and the structures of proteins.

Kinematics

Kinematics is a branch of classical mechanics devoted to the study of motion itself, not the causes of motion. As such, it is concerned with describing the various types of motion.

Two classes of motion covered by kinematics are uniform motion and non-uniform motion. A body is said to be in uniform motion when it travels equal distances in equal intervals of time (i.e. at a constant speed); for example, a body travels 5 km in 1 hour, another 5 km in the next hour, and so on. Uniform motion is closely associated with inertia as described in Newton's first law of motion. However, most familiar types of motion are non-uniform: most bodies are constantly being acted upon by many different forces simultaneously, and as a result they do not travel equal distances in equal intervals of time. For example, a body travels 2 km in 25 minutes but takes 30 minutes to travel the next 2 km.
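The average speeds in these two examples make the distinction concrete: in the uniform case the speed is a constant 5 km/h in every hour, while in the non-uniform case the two intervals give different speeds,

\[ v_1 = \frac{2\ \mathrm{km}}{25/60\ \mathrm{h}} = 4.8\ \mathrm{km/h}, \qquad v_2 = \frac{2\ \mathrm{km}}{30/60\ \mathrm{h}} = 4\ \mathrm{km/h}. \]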

Types of motion

Monday, September 12, 2011

ASSEMBLY LANGUAGE

An assembly language is a low-level programming language for computers, microprocessors, microcontrollers, and other programmable devices. It implements a symbolic representation of the machine codes and other constants needed to program a given CPU architecture. This representation is usually defined by the hardware manufacturer, and is based on mnemonics that symbolize processing steps (instructions), processor registers, memory locations, and other language features. An assembly language is thus specific to a certain physical (or virtual) computer architecture. This is in contrast to most high-level programming languages, which, ideally, are portable.

A utility program called an assembler is used to translate assembly language statements into the target computer's machine code. The assembler performs a more or less isomorphic translation (a one-to-one mapping) from mnemonic statements into machine instructions and data. This is in contrast with high-level languages, in which a single statement generally results in many machine instructions.

Many sophisticated assemblers offer additional mechanisms to facilitate program development, control the assembly process, and aid debugging. In particular, most modern assemblers include a macro facility (described below), and are called macro assemblers.



Assembler

Compare with: Microassembler.

Typically a modern assembler creates object code by translating assembly instruction mnemonics into opcodes, and by resolving symbolic names for memory locations and other entities.[1] The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution—e.g., to generate common short sequences of instructions as inline, instead of called subroutines.

Assemblers are generally simpler to write than compilers for high-level languages, and have been available since the 1950s. Modern assemblers, especially for RISC architectures such as SPARC or POWER, as well as x86 and x86-64, optimize instruction scheduling to exploit the CPU pipeline efficiently.

Number of passes

There are two types of assemblers based on how many passes through the source are needed to produce the executable program.

  • One-pass assemblers go through the source code once and assume that all symbols will be defined before any instruction that references them.
  • Two-pass assemblers create a table with all symbols and their values in the first pass, then use the table in a second pass to generate code. The assembler must at least be able to determine the length of each instruction on the first pass so that the addresses of symbols can be calculated.
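For instance, here is a minimal sketch (NASM-style x86-64 syntax for Linux; the names are illustrative) of the kind of forward reference that matters here:

section .text
global _start
_start:
        jmp     done            ; forward reference: 'done' is not yet defined at this point
        mov     rax, 1          ; code that the jump skips over
done:
        mov     rax, 60         ; Linux sys_exit system call number
        xor     rdi, rdi        ; exit status 0
        syscall

Under the one-pass assumption above, such a program would be rejected (or the assembler would have to patch the address in later), whereas a two-pass assembler records the address of done on its first pass and simply fills it in during the second.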

The original reason for the use of one-pass assemblers was speed; however, modern computers perform two-pass assembly without unacceptable delay. The advantage of the two-pass assembler is that symbols can be defined anywhere in program source code, allowing programs to be defined in more logical and meaningful ways, making two-pass assembler programs easier to read and maintain.[2]

High-level assemblers

More sophisticated high-level assemblers provide language abstractions such as structured control constructs and sophisticated macro processing.

See Language design below for more details.

Use of the term

Note that, in normal professional usage, the term assembler is used to refer both to an assembly language, and to software which assembles an assembly-language program. Thus: "CP/CMS was written in S/360 assembler" as well as "ASM-H was a widely-used S/370 assembler."[citation needed]

Assembly language

A program written in assembly language consists of a series of mnemonic statements and meta-statements (known variously as directives, pseudo-instructions and pseudo-ops), comments and data. These are translated by an assembler to a stream of executable instructions that can be loaded into memory and executed. Assemblers can also be used to produce blocks of data from formatted and commented source code, to be used by other code.

Take, for example, the instruction that tells an x86/IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001.[4]

10110000 01100001 

This binary computer code can be made more human-readable by expressing it in hexadecimal as follows

B0 61 

Here, B0 means 'Move a copy of the following value into AL', and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. Intel assembly language provides the mnemonic MOV (an abbreviation of move) for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember.

MOV AL, 61h       ; Load AL with 97 decimal (61 hex) 

At one time many assembly language mnemonics were three letter abbreviations, such as JMP for jump, INC for increment, etc. Modern processors have a much larger instruction set and many mnemonics are now longer, for example FPATAN for "floating point partial arctangent" and BOUND for "check array index against bounds". Many assembly language statements consist of an opcode mnemonic followed by a comma-separated list of data, arguments or parameters.[5]

The same mnemonic MOV refers to a family of related opcodes to do with loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers. The opcode 10110000 (B0) copies an 8-bit value into the AL register, while 10110001 (B1) moves it into CL and 10110010 (B2) does so into DL. Assembly language examples for these follow.[4]

MOV AL, 1h        ; Load AL with immediate value 1
MOV CL, 2h        ; Load CL with immediate value 2
MOV DL, 3h        ; Load DL with immediate value 3

The syntax of MOV can also be more complex as the following examples show.[6]

MOV EAX, [EBX]    ; Move the 4 bytes in memory at the address contained in EBX into EAX
MOV [ESI+EAX], CL ; Move the contents of CL into the byte at address ESI+EAX

In each case, the MOV mnemonic is translated directly into an opcode in the ranges 88-8E, A0-A3, B0-B8, C6 or C7 by an assembler, and the programmer does not have to know or remember which.[4]

Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is usually a one-to-one correspondence between simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions (essentially macros) which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences.

Each computer architecture and processor architecture usually has its own machine language. On this level, each instruction is simple enough to be executed using a relatively small number of electronic circuits. Computers differ by the number and type of operations they support. For example, a machine with a 64-bit word length would have different circuitry from a 32-bit machine. They may also have different sizes and numbers of registers, and different representations of data types in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages may reflect these differences.

Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the manufacturer and used in its documentation.

Language design

Basic elements

There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation (pseudo-op). A typical assembly language consists of three types of instruction statements that are used to define program operations:

  • Opcode mnemonics
  • Data sections
  • Assembly directives

Opcode mnemonics and extended mnemonics

Instructions (statements) in assembly language are generally very simple, unlike those in high-level language. Generally, a mnemonic is a symbolic name for a single executable machine language instruction (an opcode), and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value, or a pair of values. Operands can be immediate (typically one byte values, coded in the instruction itself), registers specified in the instruction, implied or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15 and NOP for BC with a mask of 0.

Extended mnemonics are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPUs do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. On 8086 CPUs the instruction xchg ax,ax is used for nop, with nop being a pseudo-opcode that encodes the instruction xchg ax,ax. Some disassemblers recognize this and will decode the xchg ax,ax instruction as nop. Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics NOP and NOPR for BC and BCR with zero masks.
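As a small illustration in x86 (NASM-style, 16-bit code), the architecture defines the one-byte encoding 90h both as NOP and as the short form of exchanging AX with itself, which has no effect:

bits 16
        nop                     ; the canonical no-operation; single byte 90h
        xchg    ax, ax          ; exchanging AX with itself does nothing; its short
                                ; one-byte form is the same 90h encoding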

Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction ld hl,bc is recognized to generate ld l,c followed by ld h,b.[7] These are sometimes known as pseudo-opcodes.

Data sections

There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs (programs assembled separately) or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops.
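A hedged NASM-style sketch of such data-definition statements (all names are illustrative):

section .data
global  table                   ; make 'table' visible to separately assembled programs
count:   dd    10               ; a 32-bit integer initialized to 10
message: db    "ready", 0       ; a byte string followed by a terminating zero byte
align 8, db 0                   ; pad to an 8-byte boundary with zero bytes
table:   times 8 dq 0           ; eight 64-bit entries, all zero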

Assembly directives

Assembly directives, also called pseudo opcodes, pseudo-operations or pseudo-ops, are instructions that are executed by an assembler at assembly time, not by a CPU at run time. They can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled different ways, perhaps for different applications. They also can be used to manipulate presentation of a program to make it easier to read and maintain.

For example, directives can be used to reserve storage areas and optionally set their initial contents. The names of directives often start with a dot to distinguish them from machine instructions.
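A minimal sketch of such directives in NASM syntax, assuming an assembly-time switch named DEBUG that a programmer could define on the command line (e.g. nasm -dDEBUG):

BUFSIZE equ 256                 ; assembly-time constant; reserves no storage by itself
section .bss
flag:    resb 1                 ; reserve one uninitialized byte
buffer:  resb BUFSIZE           ; reserve a 256-byte storage area
section .text
%ifdef DEBUG
        mov     byte [flag], 1  ; assembled only when DEBUG is defined
%endif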

Symbolic assemblers let programmers associate arbitrary names (labels or symbols) with memory locations. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination).
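A short sketch of how such names stand in for addresses (NASM-style x86-64; the names are illustrative):

section .data
counter:        dq 0            ; a variable referenced by name rather than by numeric address
section .text
increment:                      ; subroutine entry point; callers use the name, not the address
        inc     qword [counter]
        ret
again:                          ; a named jump (GOTO) destination
        call    increment
        jmp     again           ; repeat indefinitely (sketch only)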

Some[which?] assemblers provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses.

Assembly languages, like most other computer languages, allow comments to be added to assembly source code that are ignored by the assembler. Good use of comments is even more important with assembly code than with higher-level languages, as the meaning and purpose of a sequence of instructions is harder to decipher from the code itself.

Wise use of these facilities can greatly simplify the problems of coding and maintaining low-level code. Raw assembly source code as generated by compilers or disassemblers—code without any comments, meaningful symbols, or data definitions—is quite difficult to read when changes must be made.

Macros

Many assemblers support predefined macros, and others support programmer-defined (and repeatedly re-definable) macros involving sequences of text lines in which variables and constants are embedded. This sequence of text lines may include opcodes or directives. Once a macro has been defined its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file (including, in some assemblers, expansion of any macros existing in the replacement text).
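For instance, a hedged sketch of a programmer-defined macro and one use of it, in NASM syntax (the macro name LOAD3 is illustrative):

%macro LOAD3 3                  ; define a macro named LOAD3 taking three parameters
        mov     al, %1
        mov     cl, %2
        mov     dl, %3
%endmacro

        LOAD3   1, 2, 3         ; this single statement expands to the three MOV instructions above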

Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear far shorter, requiring fewer lines of source code, as with higher-level languages. They can also be used to add higher levels of structure to assembly programs, and optionally to introduce embedded debugging code via parameters and other similar features.

Many assemblers have built-in (or predefined) macros for system calls and other special code sequences, such as the generation and storage of data realized through advanced bitwise and boolean operations used in gaming, software security, data management, and cryptography.

Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate a large number of assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language, since such programmers are not working with a computer's lowest-level conceptual elements.
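As a sketch of that idea in NASM's macro language (the macro name SUM_UNROLLED and the label values are illustrative), the following macro emits an "unrolled" sequence of additions at assembly time, with the count supplied as a macro argument:

section .data
values: dq 1, 2, 3, 4           ; four sample 64-bit elements

%macro SUM_UNROLLED 1           ; %1 = number of additions to generate
%assign i 0
%rep %1
        add     rax, [values + i*8]   ; add the i-th element of 'values' to RAX
%assign i i+1
%endrep
%endmacro

section .text
        xor     rax, rax        ; running total starts at zero
        SUM_UNROLLED 4          ; expands to four consecutive ADD instructions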

Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (CMS/VM) and with IBM's "real time transaction processing" add-ons, CICS, Customer Information Control System, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservations systems (CRS) and credit card systems today.

It was also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages. For example, a version of a program could be generated in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly-time operators that instructed the assembler to generate arbitrary code.

This was because, as was realized in the 1960s, the concept of "macro processing" is independent of the concept of "assembly": the former is, in modern terms, more word processing or text processing than generation of object code. The concept of macro processing appeared, and appears, in the C programming language, which supports "preprocessor instructions" to set variables and make conditional tests on their values. Note that unlike certain previous macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or "go to", the latter allowing programs to loop.

Despite the power of macro processing, it fell into disuse in many high level languages (a major exception being C/C++) while remaining a perennial for assemblers. This was because many programmers were rather confused by macro parameter substitution and did not disambiguate macro processing from assembly and execution[dubious ].

Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of bugs resulting was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. In the macro:

foo:   macro a
       load a*b

the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is called with the parameter a-c, the macro expansion of load a-c*b occurs; because multiplication binds more tightly than subtraction, this computes a-(c*b) rather than the intended (a-c)*b. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters.[8]
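In NASM syntax, for example, the defensive version of such a macro could look like the following sketch (a, b and c are assembly-time constants used only for illustration):

a       equ 10
b       equ 4
c       equ 3
%macro FOO 1
        mov     eax, (%1)*b     ; parentheses around %1 keep an expression argument intact
%endmacro

        FOO     a-c             ; expands to mov eax, (a-c)*b, i.e. 28, rather than a-(c*b) = -2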

PL/I and C/C++ feature macros, but this facility can only manipulate text. On the other hand, homoiconic languages, such as Lisp, Prolog, and Forth, retain the power of assembly language macros because they are able to manipulate their own code as data.

Support for structured programming

Some assemblers have incorporated structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Dr. H.D. Mills (March, 1970), and implemented by Marvin Kessler at IBM's Federal Systems Division, which extended the S/360 macro assembler with IF/ELSE/ENDIF and similar control flow blocks.[9] This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 80s (the latter days of large-scale assembly language use).

A curious design was A-natural, a "stream-oriented" assembler for 8080/Z80 processors[citation needed] from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and what was reported to be the first commercial C compiler). The language was classified as an assembler, because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans.

There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development.[10] In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages.[11]

Use of assembly language

Historical perspective

Assembly languages were first developed in the 1950s, when they were referred to as second generation programming languages. For example, SOAP (Symbolic Optimal Assembly Program) was a 1957 assembly language for the IBM 650 computer. Assembly languages eliminated much of the error-prone and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming. However, by the 1980s (1990s on microcomputers), their use had largely been supplanted by high-level languages[citation needed], in the search for improved programming productivity. Today, although assembly language is almost always handled and generated by compilers, it is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems.

Historically, a large number of programs have been written entirely in assembly language. Operating systems were almost exclusively written in assembly language until the widespread acceptance of C in the 1970s and early 1980s. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software written by large corporations. COBOL and FORTRAN eventually displaced much of this work, although a number of large organizations retained assembly-language application infrastructures well into the 90s.

Most early microcomputers relied on hand-coded assembly language, including most operating systems and large applications. This was because these systems had severe resource constraints, imposed idiosyncratic memory and display architectures, and provided limited, buggy system services. Perhaps more important was the lack of first-class high-level language compilers suitable for microcomputer use. A psychological factor may have also played a role: the first generation of microcomputer programmers retained a hobbyist, "wires and pliers" attitude.

In a more commercial context, the biggest reasons for using assembly language were minimal bloat (size), minimal overhead, greater speed, and reliability.

Typical examples of large assembly language programs from this time are IBM PC DOS operating systems and early applications such as the spreadsheet program Lotus 1-2-3, and almost all popular games for the Atari 800 family of home computers. Even into the 1990s, most console video games were written in assembly, including most games for the Mega Drive/Genesis and the Super Nintendo Entertainment System[citation needed]. According to some industry insiders, assembly language was the best computer language to use to get the best performance out of the Sega Saturn, a console that was notoriously challenging to develop and program games for.[12] The popular arcade game NBA Jam (1993) is another example. On the Commodore 64, Amiga, Atari ST, and ZX Spectrum home computers, assembler was long the primary development language. This was in large part because BASIC dialects on these systems offered insufficient execution speed, as well as insufficient facilities to take full advantage of the available hardware. Some systems, most notably the Amiga, even have IDEs with highly advanced debugging and macro facilities, such as the freeware ASM-One assembler, comparable to those of Microsoft Visual Studio (which ASM-One predates).

The assembler for the VIC-20 was written by Don French and published by French Silk. At 1,639 bytes in length, it is believed by its author to be the smallest symbolic assembler ever written. The assembler supported the usual symbolic addressing and the definition of character strings or hex strings. It also allowed address expressions which could be combined with addition, subtraction, multiplication, division, logical AND, logical OR, and exponentiation operators.[13]

Current usage

There have always been debates over the usefulness and performance of assembly language relative to high-level languages. Assembly language has specific niche uses where it is important; see below. But in general, modern optimizing compilers are claimed[citation needed] to render high-level languages into code that can run as fast as hand-written assembly, despite the counter-examples that can be found.[14][15][16] The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers, as well as for assembly programmers.[17][18] Moreover, and to the dismay of efficiency lovers, increasing processor performance has meant that most CPUs sit idle most of the time,[citation needed] with delays caused by predictable bottlenecks such as I/O operations and paging. This has made raw code execution speed a non-issue for many programmers.

There are some situations in which practitioners might choose to use assembly language, such as when:

  • a stand-alone binary executable of compact size is required, i.e. one that must execute without recourse to the run-time components or libraries associated with a high-level language; this is perhaps the most common situation. These are embedded single-tasking programs, and use only a relatively small amount of memory. Examples include firmware for telephones, automobile fuel and ignition systems, air-conditioning control systems, security systems, and sensors.
    • particularly, a system with severe resource constraints (e.g., an embedded system) must be hand-coded to maximize the use of limited resources; but this is becoming less common as processor price decreases and performance improves.
  • interacting directly with the hardware, for example in device drivers and interrupt handlers.
  • using processor-specific instructions not implemented in a compiler. A common example is the bitwise rotation instruction at the core of many encryption algorithms (a short sketch follows this list).
  • creating vectorized functions for programs in higher-level languages such as C. In the higher-level language this is sometimes aided by compiler intrinsic functions which map directly to SIMD mnemonics, but nevertheless result in a one-to-one assembly conversion specific for the given vector processor.
  • extreme optimization is required, e.g., in an inner loop in a processor-intensive algorithm. Game programmers take advantage of hardware features to enable games to run faster. Also, large scientific simulations require highly optimized algorithms, e.g. linear algebra with BLAS[14][19] or discrete cosine transformation (e.g. the SIMD assembly version from x264[20]).
  • no high-level language exists, on a new or specialized processor, for example.
  • programs that need precise timing, such as
    • real-time programs that need precise timing and responses, such as simulations, flight navigation systems, and medical equipment. For example, in a fly-by-wire system, telemetry must be interpreted and acted upon within strict time constraints. Such systems must eliminate sources of unpredictable delays, which may be created by (some) interpreted languages, automatic garbage collection, paging operations, or preemptive multitasking. However, some higher-level languages incorporate run-time components and operating system interfaces that can introduce such delays. Choosing assembly or lower-level languages for such systems gives programmers greater visibility and control over processing details.
    • cryptographic algorithms that must always take strictly the same time to execute, preventing timing attacks.
  • complete control over the environment is required, in extremely high security situations where nothing can be taken for granted.
  • writing computer viruses, bootloaders, certain device drivers, or other items very close to the hardware or low-level operating system.
  • writing instruction set simulators for monitoring, tracing and debugging where additional overhead is kept to a minimum
  • reverse-engineering and modifying program files such as
    • existing binaries that may or may not have originally been written in a high-level language, for example when trying to recreate programs for which source code is not available or has been lost, or cracking copy protection of proprietary software.
    • video games (also termed ROM hacking), which is possible via several methods. The most widely employed is altering program code at the assembly language level.
  • writing self modifying code, to which assembly language lends itself well.
  • writing games and other software for graphing calculators.[21]
  • writing compiler software that generates assembly code; the programmers must be expert assembly language programmers to generate correct assembly code.
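For example, the rotation instructions mentioned in the list above are available directly in x86 assembly (a NASM-style sketch):

        mov     eax, 0x12345678
        rol     eax, 7          ; rotate the 32 bits of EAX left by 7 bit positions
        ror     eax, 7          ; rotate right by 7, restoring the original value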

Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain very important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behavior is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets, so studying a single assembly language is sufficient to learn the basic concepts, to recognize situations where the use of assembly language might be appropriate, and to see how efficient executable code can be created from high-level languages.[22] This is analogous to children needing to learn the basic arithmetic operations (e.g., long division), even though calculators are widely used for all except the most trivial calculations.

Typical applications

Hard-coded assembly language is typically used in a system's boot ROM (BIOS on IBM-compatible PC systems). This low-level code is used, among other things, to initialize and test the system hardware prior to booting the OS, and is stored in ROM. Once a certain level of hardware initialization has taken place, execution transfers to other code, typically written in higher level languages; but the code running immediately after power is applied is usually written in assembly language. The same is true of most boot loaders.

Many compilers render high-level languages into assembly first before fully compiling, allowing the assembly code to be viewed for debugging and optimization purposes. Relatively low-level languages, such as C, often provide special syntax to embed assembly language directly in the source code. Programs using such facilities, such as the Linux kernel, can then construct abstractions using different assembly language on each hardware platform. The system's portable code can then use these processor-specific components through a uniform interface.

Assembly language is also valuable in reverse engineering, since many programs are distributed only in machine code form, and machine code is usually easy to translate into assembly language and carefully examine in this form, but very difficult to translate into a higher-level language. Tools such as the Interactive Disassembler make extensive use of disassembly for such a purpose.

One niche that makes use of assembly language is the demoscene. Certain competitions require contestants to restrict their creations to a very small size (e.g. 256 B, 1 KB, 4 KB or 64 KB), and assembly language is the language of choice to achieve this goal.[23] When resources are a concern, especially on CPU-constrained systems such as the earlier Amiga models and the Commodore 64, assembler coding is a must. Optimized assembler code is written "by hand" and instructions are sequenced manually by programmers in an attempt to minimize the number of CPU cycles used; the CPU constraints are so great that every CPU cycle counts. Using such methods has enabled systems like the Commodore 64 to produce real-time 3D graphics with advanced effects, a feat which might be considered unlikely or even impossible for a system with a 1.02 MHz processor.[citation needed]

Related terminology

  • Assembly language or assembler language is commonly called assembly, assembler, ASM, or symbolic machine code. A generation of IBM mainframe programmers called it ALC for Assembly Language Code or BAL[24] for Basic Assembly Language.
Note: Calling the language assembler is of course potentially confusing and ambiguous, since this is also the name of the utility program that translates assembly language statements into machine code. Some may regard this as imprecision or error. However, this usage has been common among professionals and in the literature for decades.[25] Similarly, some early computers called their assembler their assembly program.[26]
  • The computational step where an assembler is run, including all macro processing, is termed assembly time.
  • The use of the word assembly dates from the early years of computers (cf. short code, speedcode).
  • A cross assembler (see cross compiler) is functionally just an assembler. This term is used to stress that the assembler is run on a computer or operating system of different type and incompatible with the system on which the resulting code is to run. Cross-assembling may be necessary if the target system cannot run an assembler itself, as is typically the case for small embedded systems. A cross assembler must provide for or interface to facilities to transport the code to the target processor, e.g. to reside in flash or EPROM memory. It generates a binary image, or Intel HEX file rather than an object file.
  • An assembler directive or pseudo-opcode is a command given to an assembler. These directives may do anything from telling the assembler to include other source files, to telling it to allocate memory for constant data.

A separate list of assemblers covers the different computer architectures, along with associated information for each specific assembler.

Further details

For any given personal computer, mainframe, embedded system, and game console, both past and present, at least one assembler – and possibly dozens – has been written. For some examples, see the list of assemblers.

On Unix systems, the assembler is traditionally called as, although it is not a single body of code, being typically written anew for each port. A number of Unix variants use GAS.

Within processor groups, each assembler has its own dialect. Sometimes, one assembler can read another assembler's dialect; for example, TASM can read old MASM code, but not the reverse. FASM and NASM have similar syntax, but each supports different macros that can make them difficult to translate to each other. The basics are all the same, but the advanced features differ.[27]

Also, assembly can sometimes be portable across different operating systems on the same type of CPU. Calling conventions between operating systems often differ slightly or not at all, and with care it is possible to gain some portability in assembly language, usually by linking with a C library that does not change between operating systems. An instruction set simulator (which would ideally be written in an assembler language) can, in theory, process the object code/ binary of any assembler to achieve portability even across platforms (with an overhead no greater than a typical bytecode interpreter). This is essentially what microcode achieves when a hardware platform changes internally.

For example, many things in libc depend on the preprocessor to do OS-specific, C-specific things to the program before compiling. In fact, some functions and symbols are not even guaranteed to exist outside of the preprocessor. Worse, the size and field order of structs, as well as the size of certain typedefs such as off_t, are entirely unavailable in assembly language without help from a configure script, and differ even between versions of Linux, making it impossible to portably call functions in libc other than ones that only take simple integers and pointers as parameters. To address this issue, the FASMLIB project provides a portable assembly library for Win32 and Linux platforms, but it is still very incomplete.[28]

Some higher level computer languages, such as C and Borland Pascal, support inline assembly where sections of assembly code, in practice usually brief, can be embedded into the high level language code. The Forth language commonly contains an assembler used in CODE words.

An emulator can be used to debug assembly-language programs.

Example listing of assembly language source code

Address   Label     Instruction (AT&T syntax)   Object code[29]

                    .begin
                    .org 2048
          a_start   .equ 3000
2048                ld length,%
2064                be done                     00000010 10000000 00000000 00000110
2068                addcc %r1,-4,%r1            10000010 10000000 01111111 11111100
2072                addcc %r1,%r2,%r4           10001000 10000000 01000000 00000010
2076                ld %r4,%r5                  11001010 00000001 00000000 00000000
2080                ba loop                     00010000 10111111 11111111 11111011
2084                addcc %r3,%r5,%r3           10000110 10000000 11000000 00000101
2088      done:     jmpl %r15+4,%r0             10000001 11000011 11100000 00000100
2092      length:   20                          00000000 00000000 00000000 00010100
2096      address:  a_start                     00000000 00000000 00001011 10111000
                    .org a_start
3000      a: