BOOTSTRAPPING

 

I was looking into the following:

http://stackoverflow.com/questions/4265716/how-do-you-write-a-compiler-for-a-language-in-that-language

http://stackoverflow.com/questions/13537/bootstrapping-a-language

http://www.rano.org/bcompiler.html : Bootstrapping a simple compiler from nothing

http://stackoverflow.com/questions/1493747/bootstrapping-a-compiler-why

http://en.wikipedia.org/wiki/Bootstrapping

Bootstrapping (or booting) refers to a group of metaphors for a self-sustaining process that proceeds without external help.

Process vs Thread

 

http://stackoverflow.com/questions/200469/what-is-the-difference-between-a-process-and-a-thread

  1. Both processes and threads are independent sequences of execution. The typical difference is that threads (of the same process) run in a shared memory space, while processes run in separate memory spaces. I’m not sure what “hardware” vs “software” threads might be referring to. Threads are an operating environment feature, rather than a CPU feature (though the CPU typically has operations that make threads efficient). Erlang uses the term “process” because it does not expose a shared-memory multiprogramming model. Calling them “threads” would imply that they have shared memory.
  2. Microsoft Windows supports preemptive multitasking, which creates the effect of simultaneous execution of multiple threads from multiple processes. On a multiprocessor computer, the system can simultaneously execute as many threads as there are processors on the computer.
  • Process
    Each process provides the resources needed to execute a program. A process has a virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working set sizes, and at least one thread of execution. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads.
  •  An executing instance of a program is called a process.
  • Some operating systems use the term ‘task’ to refer to a program that is being executed.
  • A process is always stored in main memory, also termed primary memory or random access memory.
  • A process is therefore termed an active entity: it disappears if the machine is rebooted.
  • Several processes may be associated with the same program.
  • On a multiprocessor system, multiple processes can be executed in parallel.
  • On a uni-processor system, though true parallelism is not achieved, a process scheduling algorithm is applied and the processor is scheduled to execute each process one at a time, yielding an illusion of concurrency.
  • Example: executing multiple instances of the ‘Calculator’ program. Each instance is a process (a small C sketch printing some per-process attributes follows this list).
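
As a small illustration of the per-process attributes listed above (a sketch assuming a POSIX system), the program below prints the process's unique identifier, its parent's identifier and one of its environment variables; running two instances side by side shows that each process gets its own values.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Each process has its own identifier, parent and environment. */
        printf("pid  = %ld\n", (long)getpid());
        printf("ppid = %ld\n", (long)getppid());

        const char *path = getenv("PATH");   /* per-process environment variable */
        printf("PATH = %s\n", path ? path : "(unset)");
        return 0;
    }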

 

  • Thread
    A thread is the entity within a process that can be scheduled for execution. All threads of a process share its virtual address space and system resources. In addition, each thread maintains exception handlers, a scheduling priority, thread local storage, a unique thread identifier, and a set of structures the system will use to save the thread context until it is scheduled. The thread context includes the thread’s set of machine registers, the kernel stack, a thread environment block, and a user stack in the address space of the thread’s process. Threads can also have their own security context, which can be used for impersonating clients. A thread is a subset of the process.
  • It is termed a ‘lightweight process’, since it is similar to a real process but executes within the context of a process and shares the resources allotted to that process by the kernel (see kquest.co.cc/2010/03/operating-system for more info on the term ‘kernel’).
  • Usually, a process has only one thread of control – one set of machine instructions executing at a time.
  • A process may also be made up of multiple threads of execution that execute instructions concurrently.
  • Multiple threads of control can exploit the true parallelism possible on multiprocessor systems.
  • On a uni-processor system, a thread scheduling algorithm is applied and the processor is scheduled to run each thread one at a time.
  • All the threads running within a process share the same address space, file descriptors and other process-related attributes; each thread has its own stack, although it lives inside the shared address space.
  • Since the threads of a process share the same memory, synchronizing access to the shared data within the process becomes critically important (see the mutex sketch after this list).
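
Because every thread sees the same memory, unsynchronized concurrent updates are race conditions. Below is a minimal sketch, assuming POSIX threads (pthreads): two threads increment a shared counter, and the mutex makes each read-modify-write atomic.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000

    static long shared = 0;                                   /* data visible to both threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < N; i++) {
            pthread_mutex_lock(&lock);                        /* serialize the read-modify-write */
            shared++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("shared = %ld (expected %d)\n", shared, 2 * N);
        return 0;
    }

Build with something like cc sketch.c -pthread; if the lock/unlock pair is removed, the final count typically falls short of 2*N because increments from the two threads interleave.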

The major differences between threads and processes are (a short C sketch contrasting fork() and pthread_create() follows the list):

  1. Threads share the address space of the process that created them; processes have their own address space.
  2. Threads have direct access to the data segment of their process; processes have their own copy of the data segment of the parent process.
  3. Threads can directly communicate with other threads of their process; processes must use interprocess communication to communicate with sibling processes.
  4. Threads have very little creation and switching overhead; processes have considerable overhead.
  5. New threads are easily created; new processes require duplication of the parent process.
  6. Threads can exercise considerable control over threads of the same process; processes can only exercise control over child processes.
  7. Changes to the main thread (cancellation, priority change, etc.) may affect the behavior of the other threads of the process; changes to the parent process do not affect child processes.
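
To make point 1 concrete, here is a sketch assuming a POSIX system: the increment made in a forked child is invisible to the parent, because fork() gives the child its own copy of the data, while the increment made in a thread is visible, because the thread shares the parent's address space.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int counter = 0;

    static void *bump(void *arg)
    {
        (void)arg;
        counter++;                      /* same address space as the creating thread */
        return NULL;
    }

    int main(void)
    {
        pid_t pid = fork();             /* new process: child gets its own copy of the data */
        if (pid == 0) {
            counter++;                  /* modifies only the child's private copy */
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("after fork + child increment: counter = %d\n", counter);   /* prints 0 */

        pthread_t t;
        pthread_create(&t, NULL, bump, NULL);   /* new thread: shared address space */
        pthread_join(t, NULL);
        printf("after pthread_create + join:  counter = %d\n", counter);   /* prints 1 */
        return 0;
    }

Compile with something like cc sketch.c -pthread and run; the first line prints 0 and the second prints 1.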

 

 

http://msdn.microsoft.com/en-us/library/ms681917

 

 

Interrupts


An interrupt is a signal to the operating system that an event has occurred, and it results in changes in the sequence of instructions that is executed by the CPU. In the case of a hardware interrupt, the signal originates from a hardware device such as a keyboard (e.g., when a user presses a key), mouse or system clock (a circuit that generates pulses at precise intervals that are used to coordinate the computer’s activities). A software interrupt is an interrupt that originates in software, usually by a program in user mode.
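
A user-space analogue worth noting (a sketch assuming POSIX signals): a signal delivers an asynchronous event to a process and diverts its normal instruction sequence into a handler, much as an interrupt diverts the CPU.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint = 0;

    static void on_sigint(int signo)
    {
        (void)signo;
        got_sigint = 1;                 /* handlers should only do async-signal-safe work */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigint;
        sigaction(SIGINT, &sa, NULL);   /* install the handler for Ctrl-C */

        printf("press Ctrl-C...\n");
        while (!got_sigint)
            pause();                    /* sleep until a signal diverts control to the handler */

        printf("SIGINT received: normal flow resumed after the handler returned\n");
        return 0;
    }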

Kernel Mode vs User Mode

 

http://stackoverflow.com/questions/16707098/node-js-kernel-mode-threading

http://www.linfo.org/kernel_mode.html

Kernel mode, also referred to as system mode, is one of the two distinct modes of operation of the CPU in Linux and other Unix-like systems; the other is user mode, the non-privileged mode in which ordinary application programs run.

A system call is a request to the kernel in a Unix-like operating system by an active process for a service performed by the kernel. A process is an executing instance of a program. An active process is a process that is currently advancing in the CPU (while other processes are waiting in memory for their turns to use the CPU). Input/output (I/O) is any program, operation or device that transfers data to or from the CPU and to or from a peripheral device (such as disk drives, keyboards, mice and printers).
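
As a concrete illustration (a sketch assuming Linux with glibc), the same output can be produced through the libc wrapper write() or through the raw syscall() interface; either way the request crosses from user mode into kernel mode, where the kernel performs the write on the process's behalf.

    #define _GNU_SOURCE                 /* for syscall() */
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        const char via_libc[] = "hello via the libc wrapper\n";
        write(1, via_libc, sizeof via_libc - 1);               /* wrapper around the write system call */

        const char via_raw[] = "hello via a raw system call\n";
        syscall(SYS_write, 1, via_raw, sizeof via_raw - 1);    /* explicit request to the kernel */
        return 0;
    }

Running the program under strace (if available) shows both lines ending up in the same write system call.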

The Linux kernel Version 2.6 (which was introduced in late 2003) is preemptive. That is, a process running in kernel mode can be suspended in order to run a different process. This can be an important benefit for real time applications (i.e., systems which must respond to external events nearly simultaneously).

 

Unix-like kernels are also reentrant, which means that several processes can be in kernel mode simultaneously. However, on a single-processor system, only one process, regardless of its mode, will be progressing in the CPU at any point in time, and the others will be temporarily blocked until their turns.

http://www.linfo.org/user_mode.html

 

Mode switching does not always involve context switching: a system call switches the CPU from user mode to kernel mode and back while the kernel runs on behalf of the same process, whereas a context switch hands the CPU to a different process (or thread).

 

CISC (INTEL) and RISC (PowerPC)


 

http://en.wikipedia.org/wiki/Complex_instruction_set_computing

A complex instruction set computer (CISC /ˈsɪsk/) is a computer where single instructions can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store) and/or are capable of multi-step operations or addressing modes within single instructions. The term was retroactively coined in contrast to reduced instruction set computer (RISC).[1][2]

Examples of CISC instruction set architectures are System/360 through z/Architecture, PDP-11, VAX, Motorola 68k, and x86.

RISC: from cell phones to supercomputers

RISC architectures are now used across a wide range of platforms, from cellular telephones and tablet computers to some of the world’s fastest supercomputers such as the K computer, the fastest on the TOP500 list in 2011.[2][3]

By the beginning of the 21st century, the majority of low-end and mobile systems relied on RISC architectures such as ARM and MIPS.[27]


 

OSI Model


The Open Systems Interconnection (OSI) model (ISO/IEC 7498-1) is a conceptual model that characterizes and standardizes the internal functions of a communication system by partitioning it into abstraction layers. The model is a product of the Open Systems Interconnection project at the International Organization for Standardization (ISO).

The model groups similar communication functions into one of seven logical layers. A layer serves the layer above it and is served by the layer below it. For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive packets that make up the contents of that path. Two instances at one layer are connected by a horizontal connection on that layer.

The seven layers, with each entry giving the data unit, layer, function, and description from the OSI table:

Host layers:

  • 7. Application (data unit: Data). Function: network process to application. The application layer is the OSI layer closest to the end user, which means that both the OSI application layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application-layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.

    Some examples of application-layer implementations on the TCP/IP stack include HTTP, FTP, SMTP and SNMP.
 
  • 6. Presentation (data unit: Data). Function: data representation, encryption and decryption; converting machine-dependent data to machine-independent data. The presentation layer establishes context between application-layer entities, in which the higher-layer entities may use different syntax and semantics if the presentation service provides a mapping between them. If a mapping is available, presentation service data units are encapsulated into session protocol data units and passed down the stack.

  • 5. Session (data unit: Data). Function: interhost communication, managing sessions between applications. The session layer controls the dialogues (connections) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for graceful close of sessions, which is a property of the Transmission Control Protocol, and also for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The session layer is commonly implemented explicitly in application environments that use remote procedure calls.
  • 4. Transport (data unit: Segments). Function: reliable delivery of packets between points on a network. The transport layer provides the reliable sending of data packets between nodes (with addresses) located on a network, providing reliable data transfer services to the upper layers.

    An example of a transport layer protocol in the standard Internet protocol stack is TCP, usually built on top of the IP protocol.

    Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer-4 protocols within OSI.

    Transmission Control Protocol is a connection-oriented protocol, which means that it requires handshaking to set up end-to-end communications. Once a connection is set up, user data may be sent bi-directionally over the connection.

    UDP is a simpler message-based connectionless protocol. Connectionless protocols do not set up a dedicated end-to-end connection; communication is achieved by transmitting information in one direction from source to destination without verifying the readiness or state of the receiver. One primary benefit of UDP over TCP appears in applications such as voice over IP (VoIP), where low latency and low jitter matter more than guaranteed delivery; VoIP over UDP assumes that the end users provide any necessary real-time confirmation that the message has been received. (A minimal UDP sketch follows the layer list.)

 
Media layers:

  • 3. Network (data unit: Packet/Datagram). Function: addressing, routing and (not necessarily reliable) delivery of datagrams between points on a network. The network layer provides the functional and procedural means of transferring variable-length data sequences (called datagrams) from one node to another connected to the same network. A network is a medium to which many nodes can be connected, on which every node has an address and which permits nodes connected to it to transfer messages to other nodes connected to it by merely providing the content of a message and the address of the destination node and letting the network find the way to deliver (“route”) the message to the destination node. In addition to message routing, the network may (or may not) implement message delivery by splitting the message into several fragments, delivering each fragment by a separate route and reassembling the fragments, reporting delivery errors, etc.
  • 2. Data link (data unit: Bit/Frame). Function: a reliable direct point-to-point data connection. The data link layer provides a reliable link between two directly connected nodes. Example: Point-to-Point Protocol (PPP).

 
  • 1. Physical (data unit: Bit). Function: a (not necessarily reliable) direct point-to-point data connection. It defines the electrical and physical specifications of the data connection. Examples: hubs, repeaters, network adapters.
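
The UDP sketch referenced in the transport-layer entry above, assuming POSIX sockets and a hypothetical localhost port: sendto() transmits a single datagram with no connection setup and no delivery guarantee.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);          /* UDP: a datagram socket */
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in dest;
        memset(&dest, 0, sizeof dest);
        dest.sin_family = AF_INET;
        dest.sin_port = htons(9999);                      /* hypothetical port */
        inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

        const char msg[] = "one datagram, no connection setup";
        /* Fire and forget: no handshake, no readiness check, no delivery guarantee. */
        sendto(fd, msg, sizeof msg - 1, 0, (struct sockaddr *)&dest, sizeof dest);

        close(fd);
        return 0;
    }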

 

 

Comparison with TCP/IP model

In the TCP/IP model of the Internet, protocols are deliberately not as rigidly designed into strict layers as in the OSI model.[11] RFC 3439 contains a section entitled “Layering considered harmful”.[12] However, TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols: the scope of the software application; the end-to-end transport connection; the internetworking range; and the scope of the direct links to other nodes on the local network.

Even though the concept is different from the OSI model, these layers are nevertheless often compared with the OSI layering scheme in the following way:

  • The Internet application layer includes the OSI application layer, presentation layer, and most of the session layer.
  • Its end-to-end transport layer includes the graceful close function of the OSI session layer as well as the OSI transport layer (a TCP sketch illustrating the handshake and graceful close follows this list).
  • The internetworking layer (Internet layer) is a subset of the OSI network layer (see above).
  • The link layer includes the OSI data link and physical layers, as well as parts of OSI’s network layer.
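
The TCP counterpart to the UDP sketch above (again assuming POSIX sockets and a hypothetical localhost port): connect() performs the three-way handshake that establishes the connection, and shutdown()/close() perform the graceful close mentioned above.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);         /* TCP: a stream socket */
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in srv;
        memset(&srv, 0, sizeof srv);
        srv.sin_family = AF_INET;
        srv.sin_port = htons(8080);                       /* hypothetical port */
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

        if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {   /* three-way handshake */
            perror("connect");
            return 1;
        }

        const char msg[] = "data over an established connection";
        send(fd, msg, sizeof msg - 1, 0);

        shutdown(fd, SHUT_WR);                            /* graceful close: send FIN, stop writing */
        close(fd);
        return 0;
    }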