Wednesday, December 30, 2009
For now, Heritage/1 does not provide hardware support for Virtual Memory. I do not think it will be difficult to implement in a second stage (after the machine is "in use") in a way that fits into the architecture without violating it.
Why Virtual Memory? Because, contrary to what I imagined at first, Virtual Memory is not a "luxury"; on the contrary, its absence is a fairly serious limitation when you think about an operating system itself. It will take some time before I get entangled in these matters, but the time will come, and if I ever try to port a "UNIX-like" OS such as Minix or NetBSD, I do not think the task is viable (or even possible) on a machine without Virtual Memory support.
When I looked at these issues for the first time (just six months ago), I felt that the role of Virtual Memory lay mainly in process protection and in providing a linear memory model for applications, by which the OS can commit more (virtual) memory than is physically available (thanks to "swapping"). Maybe that is why it did not seem particularly attractive for a design that prides itself on simplicity. Now, however, I am seeing the benefits of Virtual Memory in another sense, one that has to do with loading programs into memory.
The OS (or rather a component of it called the "loader") is responsible for loading each program from storage into memory, and when it does so, it needs to "relocate" the program, that is, change (on the fly) the flow-control instructions (such as JP and CALL) according to the base address assigned to the program.
Actually I planned to avoid the need for relocation by using relative-addressing instructions (which I still think is a good idea), but even with direct-addressing instructions, a loader can recalculate all the jumps without too much difficulty (I believe), although it consumes some time, of course.
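As a rough illustration of what such a relocating loader has to do, here is a sketch in C over a toy instruction encoding. The opcode values and operand sizes are invented for the example; they are not the actual Heritage/1 instruction set:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical opcodes for illustration only. */
enum { OP_JP = 0xC3, OP_CALL = 0xCD, OP_LD = 0x3E };

/* Patch every absolute address operand of JP/CALL by adding `base`.
   Assumes a toy encoding: one-byte opcodes, JP/CALL followed by a
   16-bit little-endian address, LD followed by one immediate byte. */
void relocate(uint8_t *code, size_t len, uint16_t base)
{
    size_t i = 0;
    while (i < len) {
        switch (code[i]) {
        case OP_JP:
        case OP_CALL: {
            uint16_t addr = (uint16_t)(code[i + 1] | (code[i + 2] << 8));
            addr += base;                /* rebase the jump target */
            code[i + 1] = addr & 0xFF;
            code[i + 2] = addr >> 8;
            i += 3;
            break;
        }
        case OP_LD:
            i += 2;                      /* opcode + immediate byte */
            break;
        default:
            i += 1;                      /* opcode with no operand */
        }
    }
}
```

Note that the loader must know the length of every instruction in order to walk the code stream; with relative addressing, this whole pass simply disappears.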
The elegant solution, however, would be to perform program loading in the linear-memory domain provided by Virtual Memory, for which the hardware must provide the proper support. In that case, each program would have its own virtual address space covering the entire addressable range (64K); the hardware would be responsible for mapping the addresses of each application onto the physical memory actually installed; the OS would be responsible for maintaining that mapping and, in addition, for enforcing process-protection rules. And the "loader" would be reduced to mapping the new program into the Virtual Memory system.
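A minimal model of the address mapping such an MMU would perform might look like this. The page size, the table layout, and the names are all assumptions made for the illustration; nothing here is committed Heritage/1 design:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative MMU model: the 64K virtual space is divided into
   256 pages of 256 bytes each.  These sizes are assumptions. */
#define PAGE_SIZE 256
#define NUM_PAGES 256

typedef struct {
    uint32_t frame[NUM_PAGES];   /* physical frame for each virtual page */
    uint8_t  present[NUM_PAGES]; /* 1 if the page is mapped              */
} PageTable;

/* Translate a 16-bit virtual address to a physical address.
   Returns 0 on success, -1 if the page is not mapped. */
int translate(const PageTable *pt, uint16_t vaddr, uint32_t *paddr)
{
    uint8_t page   = vaddr >> 8;     /* high byte selects the page */
    uint8_t offset = vaddr & 0xFF;   /* low byte is the offset     */
    if (!pt->present[page])
        return -1;                   /* would raise a page fault   */
    *paddr = pt->frame[page] * PAGE_SIZE + offset;
    return 0;
}
```

The point of the sketch is that the physical memory can be larger than 64K (hence the 32-bit physical address) while each process still sees its own clean 64K space.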
But as I said earlier, that support will be implemented in a second stage, once Heritage/1 is "in use".
Monday, December 28, 2009
Eight bits versus sixteen
NOTE: Heritage/1 has a (strictly) 16-bit architecture and I have no intention of changing it. What follows is just a reflection on the sidelines.
My "love at first sight" for 16 bits comes from: (1) the symmetry between the address and data sizes in a machine likely to actually be built, and (2) the fact that this is the traditional (and comfortable) size for an integer.
Perhaps in a typical context integers do not stand out as protagonists of the action, a role reserved for the rational numbers (floating point). However, under the carpet, both the OS and the application itself handle a large number of structures in memory (e.g. arrays) whose access is based on indexes that can only be integers. And since the machine's addressable space is limited to 2^16 locations (and therefore so is any structure in memory), an unsigned 16-bit integer is more than reasonable.
The problem starts when I begin to consider other tasks, such as character handling and decimal (BCD) arithmetic.
Characters (ASCII) are 8 bits, so storing them in 16-bit memory locations is not exactly the most "natural" arrangement. For its part, BCD arithmetic (as opposed to floating point) I find very attractive for its simplicity and its greater assurance of accuracy, but its numbers are represented by "nibbles" (4 bits), not "words" (16 bits).
Right now I am making adjustments to my ALU to ensure the necessary support for this work, which will definitely be handled in software! However, I wonder (and that is what this discussion is about) whether an 8-bit architecture would have addressed these problems more naturally and efficiently.
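For the record, the kind of software BCD routine I have in mind can be sketched as follows. Packing four decimal digits per 16-bit word is the natural choice here, but the routine itself is only an illustration, not actual Heritage/1 code:

```c
#include <assert.h>
#include <stdint.h>

/* Software BCD addition, digit by digit: each 16-bit word packs
   four BCD digits (nibbles).  Illustrative sketch of the kind of
   routine the OS library would provide. */
uint16_t bcd_add(uint16_t a, uint16_t b, int *carry_out)
{
    uint16_t result = 0;
    int carry = 0;
    for (int i = 0; i < 4; i++) {            /* four nibbles per word */
        int da = (a >> (4 * i)) & 0xF;
        int db = (b >> (4 * i)) & 0xF;
        int sum = da + db + carry;
        if (sum > 9) {                       /* decimal adjust */
            sum -= 10;
            carry = 1;
        } else {
            carry = 0;
        }
        result |= (uint16_t)sum << (4 * i);
    }
    *carry_out = carry;
    return result;
}
```

Chaining the carry from word to word extends this to numbers of any length, which is exactly what an accounting workload needs.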
In fact, I always saw Heritage/1 as a general-purpose minicomputer; that is, I never sat down to imagine a specific use for it. Had I done so, I would have focused on a design "optimized" for that use.
Say, for example, that the machine is designed to handle an accounting system. This involves a large number of users, reporting, and handling numbers with two or three decimal digits of precision. Thus the emphasis should be on the peripherals (terminals, printers, storage), on text handling, and on a simple rational arithmetic such as BCD.
For the peripherals, the lesson would have been taken from the IBM/360 "channels" (additional minicomputers optimized for input/output), although I think my solution (buffered, "smart" peripherals) is not far from that.
For intensive work with characters I would have opted for an 8-bit data bus, with memory and internal registers also organized in 8-bit bytes. Here I would have lost the address-data symmetry; surely I would have provided the traditional 16-bit "index registers" for indirect addressing, with which my computer would have looked like a giant "Intel 8080". Perhaps I would have conceived 16-bit operations in the ALU so that integers could be handled efficiently, for example for accessing arrays in memory. In this way the machine could provide efficient instructions for both integers and characters.
For BCD arithmetic, hardware support would have been implemented. An 8-bit architecture would have saved me almost half the space and cost of the circuits, so there would have been plenty of room to build a strong ALU, even one able to multiply and divide.
The challenge would have been to encode the instructions in only 8 bits. In Heritage/1, all instructions are encoded in a single word and the operand (if any) is always one word.
In an 8-bit architecture it is still necessary to read 16-bit operands (for example, the address in a direct-addressing instruction), which takes two machine cycles instead of one. An alternative solution is to access memory in terms of both bytes and words, that is, to wire the CPU for 16 bits but allow it to access a word or a byte at a time, as each instruction requires. It is an inelegant solution, but more efficient.
In short, this "machine optimized for an accounting system" (which does not lose generality) would be very different from Heritage/1. Its design would not be very elegant, but it would surely handle its workload more efficiently.
Following this reflection, the challenge now is to get my Heritage/1 to work comfortably with characters, and with BCD arithmetic implemented in software, but without breaking the current architecture... [I think].
Thursday, December 24, 2009
How useful is Heritage/1 in practice?
I have said that Heritage/1 wants to be an "old" but "useful" computer. However, even if the hardware works reliably, Heritage/1 presents serious limitations even within the framework of "its age" (1966-1972).
The first limitation is the purely personal (single-contributor) nature of this project, which limits the growth of the software: even if I try, I cannot get far developing everything myself, from the operating system to the applications, writing straight assembly and without the slightest support from previous code. Porting Open Source code is not possible at this stage, since the architecture of Heritage/1 predates existing Open Source code; for example, Heritage/1 does not support Virtual Memory (something that could be remedied in the future by building an MMU that fits into the existing architecture). An alternative would be to port "ancestral" code, but that is a task more archaeological than technical, and hard in both directions.
Consequently, I cannot make utility a central objective of the project. Rather, utility is a guide for experimentation in the field of software. My applications will be hopelessly primitive, but they must (of course) be designed to perform "compelling" work such as, for example, database management. The ultimate goal is not to use the machine to solve day-to-day problems, but to experience "real-life" application development within the historical context of Heritage/1 (1966-1972).
The only way to go further (just as the minis did in the 70s) would be collaboration among hundreds of programmers over the Internet, that is, converting Heritage/1 into a collaborative project. But for that, H1 would have to be sufficiently attractive, and I do not think that is the case.
The initial question cannot, therefore, be answered with too much optimism. The machine is designed to perform useful work, but in practice it will be as useful as my patience allows.
Saturday, December 19, 2009
ALU-accumulator
Heritage/1 is (or intends to be) a "classic" computer, and as such it uses an "accumulator" (register) that works together with a typical Arithmetic Logic Unit (ALU). The latter consists of 16 purely combinational circuits whose outputs feed an internal bus called the Result Bus (R-BUS).
Each of these combinational circuits has two 16-bit inputs and one 16-bit output. The inputs are continuously fed from the internal CPU Data Bus (D-BUS) and from the output of the accumulator, respectively. The output connects to the R-BUS via a three-state buffer. In particular, the first of these circuits does nothing but pass the D-BUS to its output without combining it with the contents of the accumulator, which allows A to be used as a regular register.
Logical and arithmetic instructions generate the control signals for the corresponding ALU circuit (for example, the adder) to open its output onto the R-BUS while the others remain in the third state, after which the accumulator is commanded to store the result (from the R-BUS).
In the following block diagram of the CPU, you can see the interconnection between the accumulator (A) and the ALU.
Frankly, this arrangement was the first thing that came to mind. After designing it, however, I have seen that it is not common, neither in commercial computers nor in personal projects like Heritage/1. The usual practice (as far as I have seen) is to place the accumulator "before" the ALU, allowing the result of the arithmetic-logic operation computed by it to go directly to the data bus.
In the following block diagram of the Intel 8085 microprocessor, that type of arrangement can be seen.
The advantage is obvious: the result of an arithmetic-logic operation can go to any register of the CPU (not just A), which avoids having to move it in an extra step. However, there is a price to pay: the need for a temporary register feeding the other ALU input, as shown in the figure above.
The design of Heritage/1 has among its premises a minimal use of temporary registers, ideally none at all. This is meant to save clock cycles in the execution of instructions, as well as to simplify the circuits in question. So using a temporary register for the ALU would mean a breach of this premise.
The price to pay is the inability to deposit the result of an arithmetic-logic operation in a register other than A, as illustrated in the following code segment:

; Add contents of B and D
; depositing the result in C
mov a, b
add a, d
mov c, a

With a different architecture, this would have taken a single instruction:

add c, b, d
I recognize it is a limitation imposed on the programmer (who will be myself, after all!), but I accept it for the sake of preserving a simple hardware design, in which the use of temporary registers is a kind of heresy.
Sunday, December 13, 2009
Micro-programmed Interconnect Controller for H1-OS
I always thought my peripherals should be smart, but based not on microcontrollers (LSI) but on discrete logic, just like Heritage/1 itself. Thus I conceived the idea of building a "generic controller" based on a simple FSM whose state tables would be stored in an array of diodes. Today I finally got around to developing the idea and, surprisingly, the development led me to understand the meaning of "stored-program machine". I did not arrive at a concrete design for my controller, but at a satisfying intimacy with the concept, which I will try to explain in what follows.
A STORED-PROGRAM MACHINE
The first time I read in the manual of a legendary computer something like: "This is a sequential, stored-program, parallel digital computer", I wondered whether it is possible to build a computer that is not all that, because a sequential, parallel, stored-program machine has been the de facto structure of every computer since before my birth.
It was not always so. The first-generation computers (1940s and early 50s) were serial, that is, data inside them was processed bit by bit. Remington Rand and IBM had established their respective markets with this type of equipment by the time Seymour Cray built a supercomputer whose amazing power was due, among other factors, to processing data in parallel... as we do today.
The "stored program" concept, however, was established early. Even before ENIAC was built in the United States and Colossus in England, Alan Turing had already captured the concept in theory, introducing it in his model of the "universal machine". Later, Von Neumann based his EDVAC, built in the U.S., on this principle. However, around that same time machines were built where program and data resided in separate memories, most notably the relay-based machine that IBM built for Harvard University in 1944. Today we often speak of "Von Neumann" versus "Harvard" to refer to a given computer's architecture. The former has dominated for decades, but in this century we are breaking that pattern to move quickly toward concurrent processing.
THE COMPUTER SEEN AS A FINITE STATE MACHINE (FSM)
Imagine a Finite State Machine (FSM) whose state table is stored in a memory. The technology used (RAM, diode array, etc.) is not important now; the important thing is to imagine how the data it contains is structured, and how the FSM will use it.
The purpose of the machine is to use the FSM to execute sequences. Each memory location is a step in a sequence. Within each location, each bit represents an output signal. Most outputs are "control signals" of the machine; a few output bits represent the next state of the FSM, that is, the "address" of the next step.
This arrangement would not get very far if it could not make decisions based on input data. What does a computer do, anyway? It runs sequences... and nothing else! The greatness is that such sequences are not fixed but take different paths depending on the input data and on intermediate results. And greater still is the fact that some of those "data" are used to route the FSM, that is, they are "instructions": the program itself, the Software. That is precisely the essence of the "stored program" concept: the program itself is an input element, just like the data, and from both, one sequence or another is generated; hence the convenience of storing program and data in the same medium.
Another key concept I had not noticed until today: the decision making (the next state of the FSM) also depends on conditions established much earlier by intermediate results. This is the case of a negative number after a subtraction, for example. In computers, this is usually noted in the "flags register" (Flags Register).
Back to my FSM, let us split the state table (for better organization) into "sub-sequences" of 4 steps each. This has implications for addressing the table: of the N address bits, the 2 least significant are fed from a 2-bit counter driven, in turn, by a pulse train (the clock). The remaining N-2 bits serve to address the individual subsequences.
As I said before, within each location a few output bits represent the "next step" in the sequence. Say there are M of them, and let us connect them so that they address not steps but subsequences; our FSM can then contain up to 2^M subsequences.
Returning to the comparison of my FSM with a computer, the "subsequences" can be seen as "instructions". The difference is that these do not reside in the same memory where the data live, but in a separate memory. In a "microprogrammed" computer (as has been common from the IBM/360 to this day), this would be the machine's "micro-code", and its memory would be called the "micro-code storage".
But that is not all; in fact, the machine we have so far still does not work, because it is not able to make decisions based on the input data (the real data, residing in main memory).
For this to happen, we only have to make the outputs of the Flags Register take part in addressing the state table. We have already assigned the two least significant bits to the step counter; now let us take the next 4 and feed them from the Flags Register (assuming it is 4 bits wide).
The machine will have, of course, registers and means for processing data (e.g. an ALU). The Flags inputs come from these circuits; that is, the flags are set or cleared depending on intermediate results, as intended.

MY PROPOSAL
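To make all this concrete, here is a toy simulation in C of the FSM just described: the state-table address is assembled from the subsequence number, the Flags Register, and the 2-bit step counter, and each table entry packs the control signals together with the next subsequence. All field widths and encodings here are illustrative assumptions, not a committed design:

```c
#include <assert.h>
#include <stdint.h>

/* Address layout (an assumption for this sketch):
   [ subsequence | flags (4 bits) | step counter (2 bits) ]        */
#define STEP_BITS 2
#define FLAG_BITS 4
#define SUB_BITS  2   /* up to 4 subsequences in this toy model */

#define TABLE_SIZE (1u << (STEP_BITS + FLAG_BITS + SUB_BITS))

typedef struct {
    uint16_t table[TABLE_SIZE]; /* low byte: control signals;
                                   high byte: next subsequence    */
    uint8_t  sub;               /* current subsequence            */
    uint8_t  step;              /* 2-bit step counter             */
    uint8_t  flags;             /* 4-bit flags register           */
} Fsm;

/* One clock pulse: look up the current entry, emit its control
   signals, advance the step counter, and on wrap-around jump to
   the next subsequence encoded in the entry. */
uint8_t fsm_clock(Fsm *f)
{
    unsigned addr = (f->sub << (FLAG_BITS + STEP_BITS))
                  | (f->flags << STEP_BITS)
                  | f->step;
    uint16_t entry = f->table[addr];
    uint8_t controls = entry & 0xFF;           /* control-signal outputs */
    f->step = (f->step + 1) & ((1 << STEP_BITS) - 1);
    if (f->step == 0)                          /* subsequence finished */
        f->sub = (entry >> 8) & ((1 << SUB_BITS) - 1);
    return controls;
}
```

Because `flags` participates in the address, the same subsequence reads different table rows depending on intermediate results, which is exactly the decision-making mechanism described above.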
My proposal consists in this: with such modest means, one can build a computing machine in which the software does not reside "as data" in the same memory as the true data, but as states of the FSM table in a separate memory. In other words, the machine's "software" would directly be its "micro-code".
Such a machine is capable of making decisions based on intermediate results, thanks to the Flags Register, and can in principle perform tasks as complicated as it is possible to "program" into a state table that is, in turn, feasible to build.
My intention, however, is not theoretical but practical. The work to be done by this machine is assumed to be simple enough to be programmed directly as "micro-code".
I intend to use this technique to provide some (limited) intelligence to the Device Controllers of Heritage/1. These would be printed circuit boards housed in the control unit of each peripheral, and their mission would consist of reading/writing a buffer from/to the medium in question (tape, serial port, printer, etc.), as well as negotiating the attention of the CPU in accordance with the interrupt architecture of Heritage/1.
Wednesday, December 9, 2009
Peripherals: Multi-Use Storage
Since Heritage/1 is a sequential machine, the operating system (H1-OS, which I will simply call "OS" in this article) will implement multiprocessing with "time slices", as shown in the figure below.
In the figure, P1 and P2 are two "multiprocessed" processes, that is, multiplexed in time in such a way that P1 and P2 appear simultaneous in the eyes of the user. OS is the operating system or, more specifically, the routine responsible for switching between processes. Each process runs continuously until the arrival of a TIMER interrupt (every 1 ms), which passes control to the OS.
We have seen in previous articles that a program exists as such only in Storage; when loaded into memory, it gives rise to an "instance" consisting of two areas: one for the code and one for its data (the Stack). We also saw that the same program can be multi-instantiated, in which case a single copy of the code is kept in memory, but the Stacks are created separately, one per instance. By "process" we mean just that: an instance of a given program. In particular, P1 and P2 may be instances of the same program.
STATE OF EXECUTION
When a process is interrupted (by the Timer), its "execution state" must be stored somewhere so that, when the OS returns control to it, it can continue from the exact point where it left off.
In Heritage/1, the interrupt mechanism itself provides the means to save the "execution state". Indeed, when an interrupt occurs, the Program Counter (PC) and the flags (F) are automatically saved on the Stack. But we saw that each process has its own Stack, so at any given time there are several, one for each process in the multiprocessing mix. However, for the interrupt mechanism, "the Stack" is simply the structure pointed to by SP (Stack Pointer), and at the moment of the interruption SP is pointing to the Stack of the interrupted program, so that is where PC and F get saved (automatically).
Moral: the execution state of a given process is saved on its own Stack.
When the interrupt passes control to the OS, its first task is to save the complete execution state of the interrupted process, that is, to save the rest of the registers on that process's own Stack. At this point, SP still points to the Stack of the process, so the OS completes the save with successive PUSH instructions. Finally, the OS saves SP into a data structure of its own.
The OS's second task is to pass control to the next process in turn. It is enough to find out which process that is and retrieve the corresponding Stack Pointer (that is, load SP with that value). Once this is done, SP points to the Stack of the process in question where, as has been said, its execution state is saved. The OS then executes successive POP instructions to retrieve the registers it saved earlier, after which it executes a return-from-interrupt (RET) instruction. This instruction automatically recovers PC and F and passes control to the process in question, completing the switch.
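As a rough illustration of the switching sequence just described (automatic save of PC and F, successive PUSHes, SP swap, successive POPs, interrupt return), here is a high-level sketch. This is not H1-OS code: the register names and the per-process stack representation are my assumptions, chosen only to model the mechanism.

```python
# Sketch: round-robin switching where each process's state lives on its own
# stack. The "hardware" pushes PC and F; the OS completes the save with
# successive PUSHes, swaps SP (here, the stack list itself), and restores
# the next process with POPs plus the interrupt return.

REGISTERS = ["A", "B", "C", "D"]         # hypothetical general registers

def make_proc(pc):
    return {"regs": {"PC": pc, "F": 0, "A": 0, "B": 0, "C": 0, "D": 0},
            "stack": []}

def interrupt(proc):
    """Hardware part: PC and F are saved automatically on the process's stack."""
    proc["stack"].append(proc["regs"]["F"])
    proc["stack"].append(proc["regs"]["PC"])

def save_rest(proc):
    """OS part 1: successive PUSH instructions for the remaining registers."""
    for r in REGISTERS:
        proc["stack"].append(proc["regs"][r])

def restore(proc):
    """OS part 2: successive POPs, then the interrupt return recovers PC and F."""
    for r in reversed(REGISTERS):
        proc["regs"][r] = proc["stack"].pop()
    proc["regs"]["PC"] = proc["stack"].pop()
    proc["regs"]["F"] = proc["stack"].pop()

p1, p2 = make_proc(0x0400), make_proc(0x0800)
interrupt(p2); save_rest(p2)     # pretend p2 was interrupted earlier

p1["regs"]["PC"] = 0x0412        # the timer fires while p1 executes here
interrupt(p1); save_rest(p1)     # p1's state is now on p1's own stack
restore(p2)                      # p2 resumes exactly where it left off
print(hex(p2["regs"]["PC"]))     # → 0x800
```

Note that nothing here needs a central "saved registers" table: as the article says, each process's state travels on its own Stack, and the OS only keeps the SP values.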
DATA STRUCTURE
To carry out its arbitration work, the OS uses data structures of its own. What follows is only a first approximation, since the structures as such have not yet been designed.
PGM
In an array (call it PGM) information about each available program is stored. Remember that programs reside in Storage, not in memory. To make the OS aware of their existence, a configuration file is needed where such programs are enrolled. The OS reads this file during boot and from it creates the PGM array, which specifies which programs are available and where to find them in Storage.
PROC
In an array (call it PROC) information is stored about the processes involved in multiprocessing. Initially, the array will be empty, or it may contain information about operating system processes launched automatically (such as the Shell). Each array element specifies, among other things, the memory addresses of the Code and the Stack of the process in question.
Whenever a program is instantiated, a new element is created in the PROC array. If this is the first instance, the corresponding program is loaded from Storage. The OS switching mechanism is guided by this array for its arbitration.
SESSION
Assuming that the system can be accessed from several terminals at once, a session is a "working day" from the perspective of the visitor. However, the session is not a concept inherent to multiprocessing. We will talk about it later in an article devoted to Multi-User operation.
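The PGM and PROC bookkeeping described in this section might be sketched as follows. The field names, sizes, and addresses are all assumptions on my part (the article says the structures have not been designed yet); the sketch only shows the rule that a first instance loads the code while later instances reuse it.

```python
# Sketch of the PGM and PROC arrays. Field names and the memory-allotment
# numbers are invented for illustration only.

PGM = [  # built by the OS from the configuration file at boot
    {"name": "SHELL", "vtape": 0, "record": 12, "stack_size": 64},
    {"name": "CATALOG", "vtape": 1, "record": 3, "stack_size": 128},
]

PROC = []           # one element per running instance
loaded_code = {}    # program name -> address of its single code copy
next_free = 0x2000  # hypothetical first free memory address

def instantiate(name):
    """Create a process: load code only for the first instance, always a new stack."""
    global next_free
    pgm = next(p for p in PGM if p["name"] == name)
    if name not in loaded_code:          # first instance: load from Storage
        loaded_code[name] = next_free
        next_free += 0x0400              # assumed code allotment
    stack_base = next_free
    next_free += pgm["stack_size"]
    PROC.append({"program": name,
                 "code": loaded_code[name],
                 "stack": stack_base})
    return PROC[-1]

a = instantiate("SHELL")
b = instantiate("SHELL")                 # second instance: same code, new stack
assert a["code"] == b["code"] and a["stack"] != b["stack"]
```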
OPERATING SYSTEM SERVICES
Apart from the multiprocessing arbitration routine we have just seen, the OS offers other services to applications, such as access to peripherals, memory allocation, etc.
Applications request these services through software interrupts (the INT instruction), which we call "calls to the OS" or System Calls.
When an application makes a System Call, the CPU passes control to the specified interrupt routine, which is part of the OS. The first thing this routine does is disable the timer interrupt, to avoid being disturbed during its work. And this is important: an OS interrupt routine stops multiprocessing; that is, it pauses the system to concentrate entirely on the OS task.
These routines must therefore be short and concise. Their duties include things like memory allocation, peripheral initiation, and so on. Once its brief work is finished, the routine passes control to the heart of the OS, which will then take care of moving on to the next process in turn.
But the OS also offers "long" services that cannot be written as short interrupt routines. Such is the case, for example, of data processing routines: sorting, searching, or perhaps floating point arithmetic. These services are implemented as operating system software (which we call Services) and they participate in multiprocessing like ordinary processes. To make use of them, an application makes a system call whose interrupt routine merely provides the bridge between the application and the service.
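The two kinds of System Call (short routines resolved inside the interrupt, versus bridges that hand work to a Service process) could be dispatched along these lines. The call numbers, names, and dictionary-based dispatch are purely illustrative assumptions, not the H1-OS design.

```python
# Sketch: a System Call either runs a short routine directly (with the timer,
# and thus multiprocessing, paused) or merely queues a "long" Service that
# will run as an ordinary process. Call numbers and names are hypothetical.

services_ready = []              # "long" services queued for multiprocessing

def int_alloc_memory(args):      # short routine: resolved inside the interrupt
    return {"granted": args["size"]}

def int_quick_sort(args):        # bridge: schedule the QUICK-SORT service
    services_ready.append(("QUICK-SORT", args))
    return {"queued": True}

SYSCALLS = {1: int_alloc_memory, 2: int_quick_sort}

def system_call(number, args):
    timer_enabled = False        # disable the timer: the system pauses here
    try:
        return SYSCALLS[number](args)
    finally:
        timer_enabled = True     # re-enable before returning to the scheduler

print(system_call(1, {"size": 32}))          # → {'granted': 32}
system_call(2, {"list": 0x3000, "size": 10})
print(services_ready[0][0])                  # → QUICK-SORT
```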
RELATIONSHIPS BETWEEN PROCESSES
Just as a process can request a service from the OS (which is, as we saw, another process), a process can also start another process that does not belong to the operating system. This applies, for example, to a language's run-time library (we will come back to this particular issue in other articles).
Suppose there is a service to sort lists; call it QUICK-SORT. To use it, a process must make a system call passing as arguments the address of the list in question as well as its size. Surely the process is not interested in continuing execution until QUICK-SORT has completed its work, since nothing can be done with a half-sorted list.
Remember that a service participates in multiprocessing, so its work takes place "simultaneously" with the process that called it. Most likely, then, the OS will return control to that process before QUICK-SORT has completed its work, contrary to what is wanted.
The way around this is to build kinship relationships between processes. In this case, the calling process is the "parent" of the QUICK-SORT instance (process) that the OS created upon its request. And the rule is that the parent stops until the child has finished executing.
"Stopping a process" in multiprocessing means that when its turn comes, the OS will not give it control but will pass to the next process in turn. This is achieved through the use of "semaphores", which in this case can be located in the same PROC array.
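A minimal sketch of that rule, assuming the semaphore is simply a "waiting on" field in each PROC entry (my assumption; the article leaves the representation open): the scheduler skips stopped entries, and the child's exit clears the parent's flag.

```python
# Sketch: "stopping" a process via a semaphore field kept in the PROC array.
# The scheduler skips entries that are waiting on a child.

PROC = [
    {"pid": 1, "name": "APP",        "waiting_on": 2},     # parent, stopped
    {"pid": 2, "name": "QUICK-SORT", "waiting_on": None},  # child, runnable
]

def runnable(proc):
    return proc["waiting_on"] is None

def next_process(proc_table, current_index):
    """Round-robin over PROC, skipping stopped processes."""
    n = len(proc_table)
    for step in range(1, n + 1):
        candidate = proc_table[(current_index + step) % n]
        if runnable(candidate):
            return candidate
    return None

def child_exits(proc_table, pid):
    """When the child finishes, clear the parent's semaphore and drop the child."""
    for p in proc_table:
        if p["waiting_on"] == pid:
            p["waiting_on"] = None
    proc_table[:] = [p for p in proc_table if p["pid"] != pid]

print(next_process(PROC, 0)["name"])  # → QUICK-SORT (the parent is skipped)
child_exits(PROC, 2)
print(next_process(PROC, 0)["name"])  # → APP (the parent resumes its turns)
```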
Another advantage of parent-child relationships between processes concerns cleanup and maintenance: when a parent process dies, all its children should die too, preventing them from unnecessarily occupying slots in the multiprocessing round.
MULTI-THREAD PROCESSING
The rule set out in the section above (the parent stops while the child is running) is not strict but just an option. It may happen that a process has no need to wait for the results of the requested service: for example, printing a document while continuing with other calculations unrelated to the printing.
In this case, the application would make the system call specifying that it does not want to be stopped. As a result, the process would run "simultaneously" with its children, which is nothing other than multi-threading.
MULTI-USER
Multi-user operation does not directly involve multiprocessing. We will elaborate on this theme in a separate article, given its length.
Tuesday, December 8, 2009
Processing in Heritage / 1
Until it gets a hard drive, Heritage/1 will use V-TAPEs for storage. I have mentioned that a V-TAPE is (or will be) a 3.5-inch diskette with a proprietary file system designed to simulate a magnetic tape.
So far I had imagined that a V-TAPE hosts only one file, but this is obviously a waste of space; it is quite possible, therefore, to design the file system so as to allow more than one file per V-TAPE. Suppose that is the case; I then have the choice of hosting one file or several per V-TAPE. I think that for databases (or data files in general) I will prefer a single file per V-TAPE, since these files grow over time, while for programs I will prefer several files per V-TAPE, since they are of fixed length and I can thus make better use of the disk space.
The reader will wonder what the problem is, given that the diskette is a random access medium. The answer is that a V-TAPE is no such thing, since its file system forces it to behave like a tape, and as such it is anything but straightforward. A tape is a sequential access medium: one cannot update it in place, but only insert data at the end (append) or replace the file in its entirety. Hence, hosting several files on one tape makes it impossible to update them individually. I am aware that these procedures sound archaic to the ear of a modern person, but that is the point: to savor the problems of the past.
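The constraint just described can be captured in a small model. The interface below is my assumption, not the actual V-TAPE file system: it only demonstrates that the last file can grow, new files can be appended, and nothing can be rewritten in place.

```python
# Sketch of the tape constraint: append at the end, or rewrite the whole
# tape; no in-place updates. The class and its methods are hypothetical.

class VTape:
    def __init__(self):
        self.files = []              # (name, records) pairs, in tape order

    def write_file(self, name, records):
        """Add a new file after the current end of the tape."""
        self.files.append((name, records))

    def append_records(self, records):
        """Only the LAST file on the tape can grow."""
        if not self.files:
            raise IOError("empty tape")
        name, data = self.files[-1]
        self.files[-1] = (name, data + records)

    def update_in_place(self, name, records):
        raise IOError("sequential medium: rewrite the whole tape instead")

t = VTape()
t.write_file("MASTER", ["rec1", "rec2"])
t.write_file("MOVES", ["mov1"])
t.append_records(["mov2"])           # fine: MOVES is the last file
try:
    t.update_in_place("MASTER", ["recX"])
except IOError as e:
    print(e)                         # → sequential medium: rewrite the whole tape instead
```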
DATABASES
The curious reader of history will have noticed that the old computer centers employed a large number of tape drives. I think this was not so much to increase the total capacity of the system but, rather, to allow operation with a large number of files simultaneously.
Take the case of a typical database application. Today we are used to designing databases in line with the relational model: we think of "tables" divided into "columns", we set links between them, and we do not worry about the underlying "plumbing" since the DBMS employed (MySQL, MS SQL Server, etc.) takes care of those details. The truth is that almost every modern DBMS creates a disk file for each "table" in our database, so an application, which typically employs a large number of tables, uses a large number of files on disk.
We will not use disks but V-TAPEs, and we have seen that each data file takes a V-TAPE for itself, so a large number of files means a large number of V-TAPEs, which is the same as saying a large number of diskette drives. This imposes a serious constraint on the design of our databases: we must use the smallest possible number of "tables".
Incidentally, we will not use the Relational Model but the Hierarchical one, since that was the model in use in the times Heritage/1 evokes.
PERMANENT FILES AND MOVEMENT FILES
Here is a concept lost in time, because today updating a file on disk is no problem. But this was not the case in the times of tapes.
We have seen that the only way to update a tape containing multiple files is to replace all of them with their new versions, that is, the tape in its entirety. In practice this rather means reading the old tape while creating a new one with the new data (using two tape drives)... remember that in those days memory was limited, and quite possibly not enough to hold a file in its entirety.
The interesting moral of this story is that updating files on tape is something to be avoided. One way to minimize it is the use of "permanent files".
Consider the case of a Personnel Monitoring System. There is a lot of information about each employee, but one does not recruit or lay off workers on a daily basis, so that information is not very prone to change. These files are called "permanent".
In contrast, there is frequently changing information such as hours worked per day, holidays, etc. This information is saved in "movement files". The daily task of the system is to update the movement files and then generate outputs based on them, with reference to the permanent ones. For example, the output "Pedro Perez has 4 hours of overtime this week" is generated from the permanent information on Pedro Perez (including his normal working hours) and the hours worked which were recorded daily (the movements).
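The daily cycle described above (movements interpreted against the permanent file) can be sketched as follows. The record layouts and field names are my assumptions; the Pedro Perez figures are chosen to reproduce the article's example.

```python
# Sketch: an overtime report generated from daily movement records with
# reference to the permanent (master) file. Record layouts are hypothetical.

permanent = {"Pedro Perez": {"normal_hours": 8}}   # permanent file

movements = [                                      # movement file, one per day
    {"employee": "Pedro Perez", "hours": 9},
    {"employee": "Pedro Perez", "hours": 8},
    {"employee": "Pedro Perez", "hours": 11},
]

def overtime_report(permanent, movements):
    """Hours beyond the normal working day, totaled per employee."""
    totals = {}
    for m in movements:
        normal = permanent[m["employee"]]["normal_hours"]
        extra = max(0, m["hours"] - normal)
        totals[m["employee"]] = totals.get(m["employee"], 0) + extra
    return totals

report = overtime_report(permanent, movements)
print(f"Pedro Perez has {report['Pedro Perez']} hours of overtime")
# → Pedro Perez has 4 hours of overtime
```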
The advantage of using permanent files is that we can store more than one file on the same tape, saving drives. When an update is needed (the recruitment of new employees, for example), a new permanent tape is produced which is an updated version of the old one.
The moral is this: the system must be designed trying to concentrate the movements in the smallest possible number of files, maximizing the use of permanent files.
PROGRAM FILES
We are used to the idea that our PC is a "solve-it-all" and therefore must accommodate a vast number of programs, of which only a few are used in daily practice. This follows from our PC being (as its name suggests) a "Personal Computer".
In the industrial world it is more common to find computers (even PCs) dedicated to a single function, but even in such cases their operating systems (Windows, Linux, etc.) come with a huge number of pre-installed programs providing various services and even platform applications.
This will not be the case for Heritage/1, as it was not for its counterparts back in the early 70s. If Heritage/1 served a public library, for example, it would be devoted entirely to keeping the library catalog, that is, one and only one application for the rest of its life.
The data files would reside in various V-TAPEs perennially mounted in their respective drives. Two additional V-TAPEs would be earmarked for programs: one for the OS (H1-OS) and one for the application itself.
Whenever the system had to be started, the operator would enter the "Loader" into memory using the console, and this little program would take care of loading H1-OS into memory from its V-TAPE. H1-OS will surely consist of several program files, all of them resident in the same V-TAPE... or maybe it will take a few V-TAPEs, and then the operator will have to juggle, changing V-TAPEs in the various drives over the course of the load.
With the OS already running, the operator would mount the application's V-TAPE and, with a command, instruct H1-OS to load it into memory. By then the data V-TAPEs would be mounted in their respective drives, and the library's clients could make use of the system from terminals installed in the reading room.
HOW MANY DRIVES?
I have thought of an initial complement of 8 V-TAPE drives. This magic number comes from a very simple consideration: that is what fits comfortably in a 19-inch rack. Surely one will be reserved for the OS, so I will have 7 for my database applications, whatever they are.
Interestingly, I am designing a simple database to solve a particular personal problem and, so far, I am only taking up 5 files... so the restriction has not proved too severe... so far.
Sunday, December 6, 2009
H1-OS: Housing Programs in Memory
The time to start writing an operating system (H1-OS) for Heritage/1 is still far off, but peeking at the issue now has helped me ensure proper hardware support in the design. For example, thinking about how an operating system would host various programs in memory, I have come to understand the need to include "relative addressing" in the design of the CPU instruction set.
ON ADDRESSING
The program counter (PC) of Heritage/1 is designed to increment by one unit (PC = PC + 1), to be loaded with a given address (PC = ADDR), or to have a given offset added to its content (PC = PC + OFFSET). These correspond to the addressing modes: implicit, direct, and relative, respectively.
A program that uses only relative jumps can easily be located anywhere in memory, since the program does not care "where to jump" but "how far", as illustrated in the following example.

400  mvi d, 1000   ; point D to the variable at address 1000
402  ldx a, d      ; read the variable (indirect)
403  jr 5          ; jump 5 positions forward
     ...
408                ; the jump will land at this address

If this program segment is loaded at an address other than 400, the jump will still work well, which was the purpose. However, we cannot say the same if we place the variable at an address other than 1000.
Unfortunately (and for reasons of economy) I am implementing relative addressing only for the PC, that is, for jump instructions. The solution for relocating variables comes from the way H1-OS loads programs into memory and from how programs are structured.
ANATOMY OF A PROGRAM IN HERITAGE / 1
Programs reside in "storage" (V-TAPE) as files. These consist of a label (LABEL), a body containing the executable code (CODE), and an end-of-file character (EOF).
The label contains information about the program, including the requirements of its data area, that is, the size of its Stack. When loading it into memory, the operating system (H1-OS) hosts the code in the first available memory and reserves another portion for the data area (Stack), as illustrated below.
Since this is not the only program coexisting in memory, H1-OS maintains a table of all loaded programs, saving for each the addresses allocated to the Code and the Stack. Before passing control to the program, it loads SP with the value (SP' = OS_VAR + TAMAÑO_ASIGNADO_AL_STACK).
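The loading arithmetic just described (code hosted at the first available memory, Stack reserved after it, SP' set at its top) can be sketched as follows. The label fields, the EOF marker value, and the layout rule are my assumptions for illustration.

```python
# Sketch of a program file (LABEL, CODE, EOF) and of how H1-OS might place
# it in memory. Field names and the EOF value are hypothetical.

EOF = 0x1A

def make_program_file(name, stack_size, code_words):
    label = {"name": name, "stack_size": stack_size,
             "code_size": len(code_words)}
    return {"label": label, "code": code_words, "eof": EOF}

def load(program_file, first_free):
    """Host the code at the first available memory, reserve the Stack after it."""
    code_addr = first_free
    stack_addr = code_addr + program_file["label"]["code_size"]
    sp_initial = stack_addr + program_file["label"]["stack_size"]  # SP'
    return {"code": code_addr, "stack": stack_addr, "sp": sp_initial}

f = make_program_file("DEMO", 64, [0x3E, 0x01, 0xC9])
layout = load(f, 0x2000)
print(hex(layout["sp"]))   # → 0x2043
```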
When the program starts execution, the SP register is pointing to the bottom of the stack, and the first thing the program does as part of its initialization is organize its data area. Suppose, for example (see figure), that the program uses two global variables: a 16-bit integer called N and a 32-bit double called X. To allocate memory for both variables, the program moves SP up three words (SP = SP' - 3). From that point on, the stack proper lies "above" the program's global variables, and the program accesses the variables as "distances" (offsets) from the current position of SP. The following example illustrates the above.
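The original illustration is not reproduced here; as a stand-in, this sketch models the same idea: reserve three words by moving SP, then reach the globals at fixed offsets from SP. The word layout (N at SP+0, X at SP+1..2) and the initial SP' value are assumptions of mine.

```python
# Sketch: globals allocated by moving SP three words, then accessed at
# fixed offsets from SP. Layout and addresses are invented for illustration.

memory = [0] * 0x100      # a toy memory of 16-bit words
SP = 0x80                 # SP' as assigned by H1-OS (arbitrary here)

SP = SP - 3               # reserve 3 words: N (1 word) + X (2 words)
OFF_N, OFF_X = 0, 1       # assumed offsets of the globals relative to SP

memory[SP + OFF_N] = 42            # N := 42
memory[SP + OFF_X] = 0x1234        # X, low word
memory[SP + OFF_X + 1] = 0x0001    # X, high word

print(memory[SP + OFF_N])          # → 42
```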
A program typically calls functions using the familiar CALL instruction. When this happens, both PC and F are automatically saved on the stack (just above the variable area, because that is where SP was positioned). The function's code will in turn move SP to create its own data area, just as the main program did, and will access it the same way. When the function returns, this area is released, leaving SP in its original position, that is, just above the global variables. This process can repeat recursively for as long as the size of the stack permits.
MULTIPLE INSTANCES
H1-OS allows multiple instantiation of programs, that is, the existence of multiple copies of the same program running simultaneously in memory. What we have done so far (loading a program into memory) is nothing but creating the first "instance" of it. To create a second one, we do not bring the code back from V-TAPE; we only create a second stack for the same code. Thus, both instances use the same copy of the code but have separate data areas, one for each instance.
Multiple instantiation is vital for Multi-Processing, as will be explained in the next article.
The program counter (PC) Heritage / 1 is designed to increase in a unit (PC = PC + 1), loaded with a given address (PC = ADDR), or to add your content under a given operation (PC = PC + OFFSET). This corresponds to the addressing modes: implicit, direct and on respectively.
A program that uses only relative jumps easily be located anywhere within the memory as the program does not care "where to jump," but "how far", as illustrated in the following example.
400  mvi d, 1000   ; point D to the variable at 1000
402  ldx a, d      ; read the variable (indirect)
403  jr 5          ; jump 5 addresses forward
     ...
408                ; the jump will land at this address
If this program segment is loaded at an address other than 400, the jump will still work correctly, which was the purpose. We cannot say the same, however, if we place the variable at an address other than 1000.
Unfortunately (and for reasons of economy) I am implementing relative addressing only for the PC, in the jump instructions. The solution for relocating variables comes from the way H1-OS loads programs into memory, and from how those programs are structured.
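To see why a program built on relative jumps survives relocation, here is a minimal Python sketch. It is not Heritage/1 code: the tiny instruction set ("nop", "jr", "halt") is a hypothetical stand-in, used only to show that the execution trace keeps the same shape at any load address.

```python
# Minimal sketch: a program using only PC-relative jumps behaves the same
# regardless of where it is loaded. The instruction set here is a
# hypothetical stand-in, not actual Heritage/1 machine code.

def run(program, base):
    """Execute 'program' as if loaded at address 'base'; return the
    sequence of addresses the PC visits."""
    memory = {base + i: instr for i, instr in enumerate(program)}
    pc = base
    trace = []
    while pc in memory:
        trace.append(pc)
        op, arg = memory[pc]
        if op == "jr":            # relative jump: PC = PC + offset
            pc = pc + arg
        elif op == "halt":
            break
        else:                     # any other instruction: PC = PC + 1
            pc = pc + 1
    return trace

prog = [("nop", None), ("jr", 3), ("nop", None), ("nop", None), ("halt", None)]

# Same relative trace whether loaded at 400 or at 700:
t1 = run(prog, 400)
t2 = run(prog, 700)
assert [a - 400 for a in t1] == [a - 700 for a in t2]  # identical offsets
print(t1, t2)
```

The two traces differ only by the load address, which is exactly the property a relocating loader gets for free with relative jumps.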
ANATOMY OF A PROGRAM IN HERITAGE/1
Programs reside in "storage" (V-TAPE) as files. Each file consists of a label (LABEL), a body containing the executable code (CODE), and an end-of-file mark (EOF).
The label carries information about the program, including its data-area requirements, that is, the size of its stack. When loading the program into memory, the operating system (H1-OS) places the code in the first available memory and reserves another portion for the data area (stack), as illustrated below.
Since this will not be the only program coexisting in memory, H1-OS maintains a table of all loaded programs, recording the addresses assigned to each one's code and stack. Before passing control to the program, it loads SP with the initial value (SP' = OS_VAR + TAMAÑO_ASIGNADO_AL_STACK, that is, the base plus the stack size allocated to the program).
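The loader bookkeeping just described can be sketched in a few lines of Python. All the names, addresses, and sizes below are illustrative assumptions, not the real H1-OS data structures.

```python
# Sketch of the H1-OS loader bookkeeping described above. Names, the
# starting address, and the sizes are illustrative assumptions only.

loaded_programs = []          # the OS table of all loaded programs
next_free = 0x0100            # first available memory address (assumed)

def load_program(code_size, stack_size):
    """Place the code at the first available memory, reserve a stack
    (data area) right above it, and record both addresses in the OS
    table. Returns the initial SP handed to the program."""
    global next_free
    code_addr = next_free
    stack_addr = code_addr + code_size          # data area follows the code
    loaded_programs.append({"code": code_addr,
                            "stack": stack_addr,
                            "stack_size": stack_size})
    next_free = stack_addr + stack_size
    # Before passing control: SP' = bottom of the allocated stack
    return stack_addr + stack_size

sp = load_program(code_size=200, stack_size=64)
print(hex(sp))
```

With a 200-word code body and a 64-word stack loaded at 0x0100, the program would start with SP' at 0x0208, the bottom of its stack.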
When the program begins execution, with the SP register pointing to the bottom of the stack, the first thing it does as part of its initialization is organize its data area. Suppose, for example (see figure), that the program uses two global variables: a 16-bit integer called N and a 32-bit double word called X. To allocate memory for both variables, the program moves SP up three words (SP = SP' - 3). From that point on, the stack proper sits "above" the program's global variables, and the program accesses the variables as offsets from the current SP position. The following example illustrates this.
; Reserve memory for global variables
dec sp             ; reserve memory for N
dec sp
dec sp             ; reserve memory for X
...
; Read variable N into A:
mov a, sp          ; A = SP
adi a, 3           ; A = A + 3
mov d, a           ; D = A (that is, SP + 3)
ldx a, d           ; A = [D] (that is, variable N)
A program typically calls functions using the CALL instruction. When this happens, both F and PC are automatically saved onto the stack (just above the variable area, since that is where SP was positioned). The function's code, in turn, moves SP to create its own data area, just as the main program did, and accesses it the same way. When the function returns, this area is released, leaving SP in its original position, that is, just above the global variables. This process can repeat recursively for as long as the size of the stack permits.
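The stack discipline above can be sketched in Python. This is purely illustrative (the frame sizes, the saved F value, and the return address are invented), but it mirrors the sequence: globals reserved below SP', CALL pushing F and PC, the callee lowering SP for its own area, and the return leaving SP just above the globals.

```python
# Sketch of the stack discipline described above: CALL pushes F and PC,
# the callee lowers SP for its locals, and the return restores SP.
# Values and sizes are invented for illustration; not Heritage/1 microcode.

class Stack:
    def __init__(self, bottom):
        self.mem = {}
        self.sp = bottom                 # SP starts at the bottom of the stack

    def push(self, value):
        self.sp -= 1
        self.mem[self.sp] = value

    def pop(self):
        value = self.mem[self.sp]
        self.sp += 1
        return value

stk = Stack(bottom=0x0400)
stk.sp -= 3                              # globals: N (1 word) + X (2 words)
globals_sp = stk.sp

# CALL: F and PC are saved just above the variable area
stk.push(("F", 0b0000))
stk.push(("PC", 0x0123))
stk.sp -= 2                              # callee reserves its own 2-word area

# Return: the callee releases its area, then PC and F are restored
stk.sp += 2
_, ret_pc = stk.pop()
_, ret_f = stk.pop()
assert stk.sp == globals_sp              # SP is back just above the globals
print(hex(ret_pc))
```

Each nested call repeats the same push/reserve/release pattern one level deeper, which is why recursion works until the stack runs out.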
MULTIPLE INSTANCES
H1-OS allows multiple instances of programs, that is, multiple copies of the same program running simultaneously in memory. What we have described so far (loading a program into memory) is nothing but creating a first "instance" of it. To create a second, we do not fetch the code from V-TAPE again; we only create a second stack for the same code. Thus both instances use the same copy of the code but have separate data areas, one for each instance.
Multiple instantiation is vital for multiprocessing, as will be explained in the next article.
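A short Python sketch of that idea, with invented addresses and sizes: the code is loaded once, and each new instance only gets a fresh stack recorded in the table.

```python
# Sketch of multiple instances as described above: one shared copy of
# the code, one private stack per instance. Names and addresses are
# illustrative assumptions, not real H1-OS structures.

code = {"addr": 0x0100, "size": 200}     # single copy loaded from V-TAPE

instances = []
next_free = code["addr"] + code["size"]

def new_instance(stack_size=64):
    """Create an instance: no second code load, just a fresh stack."""
    global next_free
    stack_addr = next_free
    next_free += stack_size
    instances.append({"code": code, "stack": stack_addr})
    return instances[-1]

a = new_instance()
b = new_instance()
assert a["code"] is b["code"]            # same code, shared by both
assert a["stack"] != b["stack"]          # separate data areas
print(hex(a["stack"]), hex(b["stack"]))
```

Because all per-instance state lives in the stack, nothing in the shared code needs to change when a second instance appears.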
Saturday, December 5, 2009
Philippines Driving License Template
When Heritage/1 exists as hardware (May 2010), it will be time to start writing software for it. But how do you write software for a proprietary platform? Conventional development systems generate code for a specific operating system and/or a specific CPU, not for a newborn, unique environment that nobody has ever heard of.
One alternative is to write everything directly in machine code, but you cannot get very far that way. Another option is to write one's own development tools, but that is a project in itself, much more complex than the software to be written afterwards.
Fortunately there are tools for this task, both commercial and open source, and here we enter the world of retargetable assemblers and the task of "porting" them from one platform to another.
Assume we have solved that problem, so that we can now write assembly code and generate binary files the Heritage/1 CPU can run. We do this development on a PC, of course; the resulting binary files must then be delivered to Heritage/1's memory... somehow.
AN INITIALLY EMPTY BRAIN
One of my premises in designing Heritage/1 has been that software is something "external" to the computer. This implies that H1 has no ROM or any other medium holding a startup program. When the computer powers up, nothing happens beyond a hardware reset; the machine wakes with zero intelligence.
The software resides in "storage" (possibly V-TAPEs), but delivering it to memory requires a startup program, or "Loader". This program is really small and is entered through the console, word by word, bit by bit (a task for the operator every morning upon arriving at the computing center). Once the loader is in, the operator presses the START button and the CPU begins to execute it. Running, the Loader takes charge of loading the OS from a V-TAPE previously placed in its reader. With the operating system loaded and running, Heritage/1 will have gained the intelligence needed to begin its useful work.
FIRST STEP: BATCH PROCESSING
With tenacity and patience, an operating system of its own will hopefully appear in due course. But there will be an initial period of experimentation, of course, during which all my programs will be entered directly into memory, bit by bit, through the console. Under those circumstances I shall create access to "storage" and/or (perhaps) to a serial port, in order to provide a path for loading large software into memory from an external medium such as a PC or a V-TAPE. Once that foundation is laid, I will be able to build a more or less serious software system.
STAGED DEVELOPMENT
My plan is for Heritage/1 to pass, in stages, through certain "historical eras" as far as its mode of operation is concerned.
History teaches us that for a decade (the 1950s) computers worked only in batch mode. This refers to a computer's ability to run several programs in cascade, one after another, without human intervention between the end of one and the launch of the next.
Today "batch" seems to us the opposite of "interactive", but at the time of its introduction (the early 1950s), batch processing was the alternative to stopping the machine every time a program ended and taking time to prepare the next one. Batch processing increased the machine's overall efficiency by minimizing costly CPU idle time.
A computer like Heritage/1 could have served in a data center of those days, operating in batch mode. Instead of users there would be professional operators, and instead of an operating system, a simple "Monitor" program in charge of reading the specifications of the programs to run and managing their loading and execution automatically, according to priorities and resource availability.
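That Monitor's job can be sketched in a few lines of Python. The job names, the priority scheme (1 = highest), and the memory check are all invented for illustration; the point is only the shape of the loop: read the job specifications, then load and run each job in turn with no operator in between.

```python
# Sketch of a 1950s-style "Batch Monitor" as described above: take the
# job specifications and run them one after another by priority, with
# no human intervention in between. Entirely illustrative.

jobs = [
    {"name": "payroll",   "priority": 1, "memory": 16},
    {"name": "inventory", "priority": 3, "memory": 8},
    {"name": "billing",   "priority": 2, "memory": 8},
]

def batch_monitor(jobs, available_memory=16):
    """Run jobs in priority order (1 = highest), skipping any job
    whose memory requirement exceeds what the machine has."""
    log = []
    for job in sorted(jobs, key=lambda j: j["priority"]):
        if job["memory"] <= available_memory:
            log.append(job["name"])      # load, run, then release memory
        else:
            log.append(job["name"] + " (skipped: insufficient memory)")
    return log

print(batch_monitor(jobs))
```

The CPU never sits idle waiting for an operator between jobs, which is exactly the efficiency argument made above.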
Heritage/1 will therefore have its "Batch Monitor" and will operate in batch mode for some time. That will be the time to refine programming techniques and (no doubt) make adjustments to the CPU hardware and the peripherals.
Then comes a much more interesting stage: multiprocessing. But more on that in a separate article, given the size of the topic.