Sunday, January 26, 2020

Concurrent Processes In Operating Systems

Concurrent Processes In Operating Systems

The programming technique of using interrupts to simulate the concurrent execution of several programs on the Atlas computer was known as multiprogramming. It was pioneered by Tom Kilburn and David Howarth. In the early days multiprogramming was done in assembly language. The slightest mistake could make a program unpredictable, so testing was difficult, and assembly language offered no conceptual foundation. Operating systems designed using these multiprogramming techniques grew so huge and unpredictable that their designers spoke of a software crisis. This created an urgent need for research and development in concurrent programming techniques. Computer scientists took the first steps towards understanding the issues related to concurrent programming during the mid 1960s: they discovered fundamental concepts, expressed them in programming notation, included them in programming languages, and used these languages to write model operating systems. The same concepts were later applied to other forms of parallel computing.

Introduction of concurrent processes in operating systems

Processes played a key role in shaping early operating systems. They were generally run in a strictly sequential order. Multiprogramming existed, but the processes did not exactly run concurrently; instead a time-based mechanism was used in which a limited amount of time was given to each process. Even in those days processor speed was fast enough to give the illusion that multiple processes were running concurrently. These were called time-sharing or multiprogramming operating systems (for example CTSS, the Compatible Time-Sharing System demonstrated in November 1961, and Multics, a predecessor of UNIX, both developed at MIT). These operating systems were very popular and were seen as a breakthrough at the time.
The major drawback was the complexity of the system design, which made it difficult to build a more versatile and flexible, single all-purpose OS. The resource sharing done by these processes was also primitive and inefficient, which only showed there was a lot of room for research and development. Work on these operating systems paved the way for concurrent processes, and most of the original concepts related to concurrency were developed during this period. These innovative ideas and concepts went on to become the basic principles on which today's operating systems and concurrent applications are designed. (A major project undertaken by IBM in this direction was OS/360, begun in 1964 for the new System/360 mainframes.) To build reliable concurrent processes, understanding and developing the basic concepts of concurrency was important, so let us talk about concurrency and some of its basic programming concepts.

Concurrency

In computer science, concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. [Wikipedia] Consider a real-life example: a housing project such as the building of a house will require some work to go on in parallel with other work. In principle, a project like building a house does not require any concurrent activity, but a desirable feature of such a project is that the whole task can be completed in a shorter time by allowing various sub-tasks to be carried out concurrently. There is no reason the painter cannot paint the house from outside (weather permitting!) while the plasterer is busy in the upstairs rooms and the joiner is fitting the kitchen units downstairs. There are, however, some constraints on the concurrency that is possible. The bricklayer will normally have to wait until the foundation of the house has been laid before he can begin the task of building the walls.
The various tasks involved in such a project can usually be regarded as independent of one another, but the scheduling of the tasks is constrained by rules of the form "task A must be completed before task B can begin". A second example is that of a railway network. A number of trains make journeys within the network, and by contrast with the previous example, when each journey starts and ends is generally independent of most of the other journeys. Where the journeys do interact is at places where routes cross or use common sections of track for parts of journeys. We can regard the movement of trains as programs in execution, and the sections of track as the resources which these programs may or may not have to share with other programs. Two trains thus run concurrently, and where their routes interact they share the same resources without interfering with each other, much like concurrent processes in operating systems. As discussed earlier, processes are important to implementing concurrency, so let us discuss the process as a concept, which will introduce us to the most important concept for concurrency: threads!

Fundamental concepts

Process: A process is a running program; the OS keeps track of running programs in the form of processes and their data. A process is made up of one or more threads.

Threads: The need to write concurrent applications introduced threads. In other words, threads are processes that share a single address space. Each thread has its own program counter and stack. Threads are often called lightweight processes: N threads share 1 page table, 1 address space and 1 PID, while N processes have N page tables, N address spaces and N PIDs. A thread, therefore, is a sequence of executing instructions that runs independently of other threads and yet can share data with other threads directly. A thread is contained inside a process.
There can exist multiple threads within a process that share resources like memory, while different processes do not share these resources.

A simple thread example

There are two classes defined in this example: SimpleThread, which is a subclass of the Thread class, and the TwoThreads class.

class SimpleThread extends Thread {
    public SimpleThread(String str) {
        super(str);
    }
    public void run() {
        for (int i = 0; i < 10; i++) {
            System.out.println(i + " " + getName());
            try {
                sleep((int) (Math.random() * 1000));
            } catch (InterruptedException e) {}
        }
        System.out.println("DONE! " + getName());
    }
}

The method SimpleThread() is a constructor which sets the Thread's name, used later in the program. The action takes place in the run() method, which contains a for loop that iterates ten times, displaying the iteration number and the name of the Thread and then sleeping for a random interval of up to a second. The TwoThreads class provides a main() method that creates two SimpleThread threads named "London" and "NewYork".

class TwoThreads {
    public static void main(String[] args) {
        new SimpleThread("London").start();
        new SimpleThread("NewYork").start();
    }
}

The main() method starts each thread immediately following its construction by calling the start() method. The following concepts are mostly used at the thread level, and the issues discussed are encountered while implementing concurrency.

Race condition

A race condition occurs when multiple processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the access takes place. [http://www.topbits.com/race-condition.html] It is not easy to detect a race condition during program execution; if the value of shared variables is observed to be unpredictable, a race condition may be the cause. In concurrent programming there is more than one legal thread interleaving, so the order of thread execution cannot be predicted, and a race condition may produce uncertain results.
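The lost-update race just described can be reproduced in a short, self-contained Java sketch (class and method names here are made up for illustration; only the JDK is assumed). Two threads each increment a shared counter; the plain `int` version can lose updates, while the AtomicInteger version cannot.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates the race described above: two threads each increment a
// shared counter 100,000 times. The unsynchronized counter may lose
// updates; the AtomicInteger version always ends at 200,000.
public class RaceDemo {
    static int unsafeCounter = 0;
    static final AtomicInteger safeCounter = new AtomicInteger();

    static int run(boolean safe) {
        unsafeCounter = 0;
        safeCounter.set(0);
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                if (safe) safeCounter.incrementAndGet();
                else unsafeCounter++;   // read-increment-write: not atomic
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return safe ? safeCounter.get() : unsafeCounter;
    }

    public static void main(String[] args) {
        System.out.println("unsafe: " + run(false));  // often less than 200000
        System.out.println("safe:   " + run(true));   // always 200000
    }
}
```

Running the unsafe variant a few times usually shows a different (too small) total on each run, which is exactly the unpredictability of shared-variable values mentioned above.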
The outcome of a race condition may only become visible after a long time. In order to prevent unpredictable results caused by race conditions, the following methods are used.

Mutual exclusion

Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of computer code called critical sections. (Wikipedia)

-Critical Region (CR)

A part of code that is always executed under mutual exclusion is called a critical region. The compiler, instead of the programmer, is then supposed to check that the resource is neither being used nor referred to outside its critical regions. In practice, critical sections are often implemented using semaphores. CRs are needed only if the data is writeable. A critical region consists of two parts: Variables: these must be accessed under mutual exclusion. New language statement: this identifies a critical section that has access to the variables. For example, two processes A and B may each contain critical regions, i.e. the code where shared data is readable and writable.

-Semaphores

Semaphores are mechanisms which protect critical sections and can be used to implement condition synchronization. A semaphore encapsulates the shared variable, and using the semaphore, only an allowed set of operations can be carried out. It can suspend or wake processes. The two operations performed on semaphores are wait and signal, also known as P and V respectively. When a process performs a P operation it notifies the semaphore that it wants to use the shared resource; if the semaphore is free, the process gains access to the shared variable and the semaphore is decremented by one, otherwise the process is delayed. When a V operation is performed, the process notifies the semaphore that it has finished using the shared variable, and the semaphore value is incremented by one. By using semaphores, we also attempt to avoid another multiprogramming problem: starvation.
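The P/V protocol maps directly onto java.util.concurrent.Semaphore, where acquire() plays the role of P and release() the role of V. The sketch below (hypothetical class name, single-threaded for determinism) uses tryAcquire() to observe the permit count without blocking:

```java
import java.util.concurrent.Semaphore;

// A binary semaphore guarding one shared resource: acquire() is P,
// release() is V. tryAcquire() lets us observe the permit state
// without blocking the thread.
public class SemaphoreDemo {
    public static boolean[] demo() {
        Semaphore mutex = new Semaphore(1);   // 1 permit: binary semaphore
        boolean first  = mutex.tryAcquire(); // P succeeds, permits -> 0
        boolean second = mutex.tryAcquire(); // P fails: resource in use
        mutex.release();                     // V, permits -> 1
        boolean third  = mutex.tryAcquire(); // P succeeds again
        return new boolean[]{first, second, third};
    }

    public static void main(String[] args) {
        boolean[] r = demo();
        System.out.println(r[0] + " " + r[1] + " " + r[2]);  // true false true
    }
}
```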
There are two kinds of semaphores: Binary semaphores control access to a single resource, taking the value of 0 (resource is in use) or 1 (resource is available). Counting semaphores control access to multiple resources, thus assuming a range of nonnegative values.

-Locks

The most common way to implement a mutex is using locks. A lock can be either locked or unlocked. The concept is analogous to the locks we use on our doors: a person enters the room, locks the door, starts working and leaves the room after finishing the job; if another person wants to enter the room while one person is already inside, he has to wait until the door is unlocked. Subtasks in a parallel program are often called threads. Smaller, lightweight versions of threads are known as fibres, which are used by some parallel computer architectures, and bigger versions are called processes. Threads often need to change the value of a shared variable, and instructions from different threads can interleave in any order. For example, consider the following program:

Thread A                         Thread B
1A: Read variable X              1B: Read variable X
2A: Increment value of X by 1    2B: Increment value of X by 1
3A: Write back to variable X     3B: Write back to variable X

As we can see, both threads carry out the same steps: read the shared variable, increment its value and write it back to the same variable. It is clear how vital it is to execute these instructions in the correct order; for instance, if instruction 1A is executed between 1B and 3B it will generate an incorrect output. If a lock is held by one thread, another thread cannot read or write the shared variable.
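In Java, this lock/unlock discipline around the read-increment-write sequence can be sketched with java.util.concurrent.locks.ReentrantLock (a sketch with made-up class names; the lock() / unlock() calls correspond to the lock and unlock steps discussed here):

```java
import java.util.concurrent.locks.ReentrantLock;

// Two threads run the same critical section around a shared variable;
// the lock guarantees the read-increment-write steps never interleave.
public class LockDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int x = 0;

    static int run() {
        x = 0;
        Runnable work = () -> {
            for (int i = 0; i < 50_000; i++) {
                lock.lock();          // lock variable X
                try {
                    x = x + 1;        // read, increment, write back
                } finally {
                    lock.unlock();    // unlock variable X
                }
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(run());    // always 100000
    }
}
```

The unlock() call sits in a finally block so the lock is released even if the critical section throws, a standard idiom with explicit locks.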
The following example explains the usage of locks:

Thread A                         Thread B
1A: Lock variable X              1B: Lock variable X
2A: Read variable X              2B: Read variable X
3A: Increment value of X by 1    3B: Increment value of X by 1
4A: Write back to variable X     4B: Write back to variable X
5A: Unlock variable X            5B: Unlock variable X

Whichever thread locks the variable first uses that variable exclusively; no other thread can gain access to the shared variable until it is unlocked again. Locks are useful for correct execution, but on the other hand they slow down the program.

-Monitors

A monitor is a mutual-exclusion-enforcing synchronization construct. Monitors provide more structure than conditional critical regions and can be implemented as efficiently as semaphores. Monitors are supported by a programming language rather than by the operating system. They were introduced in Concurrent Pascal and are used as the synchronization mechanism in the Java language. A monitor consists of code and data. All of the data and some of the code can be private to the monitor, accessible only to the code that is part of the monitor. A monitor has a single lock that must be acquired by a task to execute monitor code, i.e. mutual exclusion is provided by making sure that executions of procedures in the same monitor do not overlap. The task which owns the monitor lock is called the active task; there cannot be more than one active task in the monitor. A task acquires the monitor's lock through one of several monitor queues, and gives up the lock either by blocking on a condition variable or by returning from a monitor method. A condition variable is an event queue that is part of the monitor, accessed through two monitor operations called wait and notify. The behaviour of a monitor is determined by the relative priorities and scheduling of its various types of queues. Monitor locks are acquired by the processes waiting in the monitor queues.
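In Java every object is a monitor: synchronized methods provide the mutual exclusion, and wait()/notifyAll() are the wait and notify operations just mentioned. A minimal sketch (hypothetical class name) is a one-slot buffer where the consumer waits until the producer fills the slot:

```java
// A one-slot bounded buffer built as a monitor. synchronized gives
// mutual exclusion; wait() gives up the monitor lock until the
// condition changes, and notifyAll() wakes the waiting task.
public class MonitorDemo {
    private Integer slot = null;   // shared data, private to the monitor

    public synchronized void put(int v) throws InterruptedException {
        while (slot != null) wait();   // give up the lock until empty
        slot = v;
        notifyAll();                   // wake any waiting consumer
    }

    public synchronized int take() throws InterruptedException {
        while (slot == null) wait();   // give up the lock until full
        int v = slot;
        slot = null;
        notifyAll();                   // wake any waiting producer
        return v;
    }

    public static int demo() {
        MonitorDemo buf = new MonitorDemo();
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 3; i++) buf.put(i);
            } catch (InterruptedException e) { }
        });
        producer.start();
        try {
            int sum = buf.take() + buf.take() + buf.take();  // 1 + 2 + 3
            producer.join();
            return sum;
        } catch (InterruptedException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Note the while loops around wait(): a woken task must re-check its condition, since another task may have changed the shared data before it reacquired the lock.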
The queues may be combined in some implementations. When the monitor lock becomes free, the waiting tasks compete for it.

Condition variable: In order to make sure that processes do not enter a busy-waiting state, they need to be able to notify events to each other; this facility is provided by monitors with the help of condition variables. If a monitor function can only proceed once some condition becomes true, it waits on the corresponding condition variable. When a process waits, it gives up the lock and is taken out of the set of runnable processes. When a process makes the condition true, it notifies a waiting process using the condition variable.

The methods mentioned above are used to prevent race conditions, but they might result in serious problems like deadlock and starvation; let us have a look at these problems one at a time.

Deadlock

Deadlock refers to a specific condition where two or more processes are each waiting for the other to release a resource, or more than two processes are waiting for resources in a circular chain.

Conditions for deadlock to occur:

1] Mutual exclusion: only one process can use a resource at a time.
2] Hold and wait: a process may hold an allocated resource while awaiting assignment of another resource.
3] No pre-emption: a resource can be released only voluntarily by the process holding it; one process cannot forcefully take a resource held by another, and a process holding resources cannot be interrupted until it has finished using them.
4] Circular wait: a closed chain of processes exists, such that each process holds a resource required by the next process in the chain.

Deadlock occurs only when the circular wait condition is unresolvable, and circular wait is unresolvable when the first three conditions hold; hence all four conditions taken together constitute the necessary and sufficient condition for deadlock.
In the diagram above we can see that process P1 holds resource R1 and requests resource R2 held by process P2, while process P2 is requesting resource R1.

Methods to handle deadlock

1. Deadlock prevention: ensure that one of the four necessary conditions for deadlock can never hold, in the following ways. Mutual exclusion: allocate a resource to only one process at a time. Hold and wait: require a process to request and be allocated all its resources before it begins execution, or allow a process to request a resource only when it holds none. This may lead to low resource utilization, and it may also give rise to starvation: a process may be held up for a long time waiting for all its required resources. The application needs to be aware of all the resources it requires; if it needs additional resources, it releases all the resources it holds and then requests everything it needs. No pre-emption: if a process holding some resources requests another resource that cannot be allocated to it, it releases all the resources currently held; the state of a pre-empted resource has to be saved and later restored. Circular wait: to make this condition fail, impose a total ordering on all resource types and require each process to request resources in strictly increasing order; resources of the same type have to be requested together.

2. Deadlock avoidance: the system checks whether granting a request is safe or not. The system needs additional prior information regarding the overall potential use of each resource by each process, i.e. the maximum requirement of each resource has to be stated in advance by each process.

3. Deadlock detection: it is important to know whether a deadlock situation exists in the system, hence an algorithm is needed to periodically check for the existence of deadlock.
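The circular-wait prevention rule, acquiring resources in one fixed total order, is easy to demonstrate in Java (a sketch with made-up names; r1 and r2 stand for two shared resources). Because both threads lock r1 before r2, a hold-and-wait cycle can never form and both threads always complete:

```java
// Circular-wait prevention: both threads acquire the two locks in the
// same fixed order (r1 before r2), so no cycle of waiting can form.
// If one thread locked r2 first, the two could deadlock.
public class LockOrderingDemo {
    private static final Object r1 = new Object();
    private static final Object r2 = new Object();
    private static int finished = 0;

    static int run() {
        finished = 0;
        Runnable task = () -> {
            synchronized (r1) {       // always lock r1 first...
                synchronized (r2) {   // ...then r2: no circular wait
                    finished++;       // safe: both locks are held
                }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return finished;
    }

    public static void main(String[] args) {
        System.out.println(run());   // 2: both threads completed
    }
}
```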
Recovery from deadlock

To recover from deadlock, processes can be terminated or resources can be pre-empted. With process termination we can terminate all the deadlocked processes at once, or terminate one process at a time and check again for deadlock. Similarly, there are mechanisms like fair scheduling that can be used to avoid starvation of resources.

-Fair scheduling

Fair scheduling allows multiple processes to fairly share the resources. The main idea is to ensure each thread gets equal CPU time and to minimize resource starvation.

-First in, first out (FIFO)

FIFO, or First Come, First Served (FCFS), is the simplest scheduling algorithm: it queues processes in the order they arrive in the ready queue. Scheduling overhead is minimal because context switches occur only when a process terminates, and no re-organization of the process queue is required. In this scheme every process eventually completes, hence no starvation.

-Shortest remaining time

With this scheduling scheme, the process with the least remaining processing time is placed next in the queue. To achieve this, prior knowledge of completion times is required. When a shorter process arrives while another process is running, the current process is pre-empted, splitting its execution into two parts; this results in additional context-switching overhead.

-Fixed-priority pre-emptive scheduling

The operating system gives a fixed priority rank to every process, and the processes are arranged in the ready queue based on their priority; this results in higher-priority processes interrupting lower-priority processes. Waiting and response times are inversely proportional to the priority of the process. If there are more high-priority processes than low-priority processes, the latter may starve.

-Round-robin scheduling

In this scheduling algorithm, each process is allotted a fixed time unit.
There could be extra overhead if the time unit allotted per process is very small. Round robin has better average response time than the other scheduling algorithms. There cannot be starvation, since processes are not queued based on priority.

There are also some desired properties of concurrent programs; these properties help ensure a reliable concurrent program. The characteristics a concurrent program must possess can be either safety or liveness properties. Safety properties assert that nothing bad will ever happen during a program execution; a safety property is a condition that is true at all points in the execution of a program. Examples of safety properties are: mutual exclusion, no deadlock, and partial correctness. Liveness properties assert that something good will eventually happen during a program execution. Examples include: fairness (weak), reliable communication, and total correctness.

Communicating sequential processes

Communicating Sequential Processes (CSP) was introduced in a paper written by C. A. R. Hoare in 1978. In this paper he described how various sequential processes could run in parallel irrespective of the processor (i.e. on a single-core or multi-core processor). CSP is an integration of two terms, communication and sequential process. A communication is an event that is described by a pair (C, V), where C is the name of the channel on which communication takes place and V is the value of the message which passes through this channel (C. A. R. Hoare). In a sequential process a new process cannot be started until the preceding process has completed. As CSP was originally presented as a programming language, most of its syntax and notation were inherited from the ALGOL 60 programming language. Most of the notation consisted of single characters instead of English words; for example, ? and ! represent input and output respectively.
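CSP-style channel communication, the (C, V) event above, can be loosely approximated in Java with a SynchronousQueue standing in for the channel (a rough analogy under stated assumptions, not the CSP algebra itself; class and variable names are made up). A SynchronousQueue has no buffer, so sender (!) and receiver (?) must meet at the channel, like a CSP rendezvous:

```java
import java.util.concurrent.SynchronousQueue;

// A rendezvous on a channel: put() blocks until a matching take()
// arrives, so the communication is a single shared event between
// the two processes, as in CSP's (C, V) pair.
public class ChannelDemo {
    static int demo() {
        SynchronousQueue<Integer> channel = new SynchronousQueue<>();
        Thread sender = new Thread(() -> {
            try {
                channel.put(42);          // roughly: channel ! 42
            } catch (InterruptedException e) { }
        });
        sender.start();
        try {
            int v = channel.take();       // roughly: channel ? v
            sender.join();
            return v;
        } catch (InterruptedException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());   // 42
    }
}
```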
CSP adopted the concept of coroutines over older programming structures such as subroutines. The coroutine structure comprises COPY (copies characters from the output of one process to the input of a second process), SQUASH (replaces specified characters with other characters), DISASSEMBLE, ASSEMBLE and REFORMAT.

-OCCAM

One renowned implementation of CSP is occam, named after William of Ockham. It is a strict procedural language, developed at INMOS. The occam2 programming language is used in software development companies across the world. It is an extension of occam1, which lacked multi-dimensional arrays, functions and support for other data types. occam2 came into existence in 1987; the latest version is occam2.1, which was developed in 1994. The BYTESIN operator, fixed-length arrays returned from procedures, and named data types were some of the new features of occam2.1. The compiler designed for occam2.1, named KRoC (Kent Retargetable occam Compiler), is used to create machine code for different microprocessors. occam-pi is the name of a newer occam variant influenced by the pi-calculus; it is implemented by newer versions of KRoC.

-JCSP

The Java programming language also implements the concepts of CSP through JCSP. JCSP is a complete programming implementation of CSP, i.e. it does not expose the deep mathematical algebra. JCSP is used to avoid race conditions, deadlock, livelock and starvation programmatically in Java programs. The main advantage of JCSP is that most of the algebraic machinery is already developed and stored in libraries, so the programmer does not require strong mathematical skills; to invoke a method he simply imports these built-in libraries.

Concurrency test tools

Designing a concurrent application is a very challenging task, and maintaining the interactions between concurrently executing threads is very difficult for the programmer.
It is very difficult to understand the behaviour of threads from one run of a program, as they are nondeterministic; as a result, testing and debugging become very difficult. So it is a good idea to invest in techniques and tools that can avoid these conditions and aid the process of development. We explore these ideas through some tools for concurrency.

CHESS

CHESS, created by Microsoft Research, is an important tool used to test multithreaded code systematically. It facilitates both model checking and dynamic analysis, and has the potential to detect race conditions, livelocks, hangs, deadlocks and data-corruption issues. Concurrency errors are detected by investigating thread schedules and interleavings; for this it uses a specialized scheduler on which it repeatedly runs regular unit tests, with the scheduler creating specific thread interleavings. CHESS controls state-space explosion using iterative context bounding, which puts a limit on the number of thread switches; this builds on the empirically supported observation that most concurrency bugs can be revealed with a small number of thread switches, and works far better than traditional model checking. CHESS uses the Goldilocks lockset algorithm to detect deadlocks and race conditions. For reporting a livelock, it relies on the expectation that programs terminate and exhibit fairness towards all threads.

THE INTEL THREAD CHECKER

Like CHESS, the Intel Thread Checker is used for detecting concurrency problems such as data races and deadlocks, and it also finds erroneous synchronization. The Thread Checker instruments the source code or the compiled binary in order to record memory references and to monitor Win32 synchronization primitives. At execution time, the information provided by the instrumented binary is used to construct a partial order of execution; this step is followed by a happens-before analysis of the partial order obtained.
For improving efficiency and performance, it is better to remember the latest access to a shared variable than to remember all accesses. The disadvantage of this tool is that it cannot find all bugs while analysing long-running applications.

RACERX

Unlike the two dynamic analysis tools discussed above, RacerX is a static analysis tool. The user is not required to annotate the entire source code; instead, the user provides a table containing specifications of the APIs used to acquire and release locks. Using such small tables proves advantageous because it lessens the overhead of annotating the entire source code. RacerX works in several phases. In the first phase it builds a control flow graph (CFG) by iterating through each source code file; the CFG contains information about function calls, use of pointers, shared memory and other data. Once the CFG is built, calls to the lock APIs are marked. This first phase is followed by the analysis phase, which involves checking for race conditions and deadlock. The last phase is post-processing the reported errors, the purpose being to prioritize errors by their significance and harmfulness.

CHORD

CHORD is a context-sensitive static analysis tool for the Java language. Its flow-insensitive nature makes it more scalable than other static tools, at the cost of lower accuracy. It also deals with the distinct synchronization primitives available in Java.

ZING

ZING, a pure model-checking tool, verifies the design of multithreaded programs. It can model concurrent state machines using its own language that describes complex states and transitions. It assures design quality by verifying assumptions and confirming the presence or absence of certain conditions.

KISS

Microsoft Research developed another model-checking tool, named KISS (Keep It Simple and Sequential), for concurrent C programs.
It converts a concurrent C program into a sequential program that models the interleaving of operations and controls the non-determinism; the analysis is then performed by a sequential model checker. While using this tool, the programmer is expected to justify the validity of the concurrency assumptions. The introduction of multi-core processors has increased the importance of concurrency many times over.

Concurrency and multi-core processors

Multi-core processors

The computer industry is undergoing a paradigm shift: chip manufacturers are shifting development resources away from single-processor chips to a new generation of multi-processor chips known as multicores. Multiple processors are manufactured by placing them on the same die, hence they share the same circuit. (A die is a small block of semiconducting material on which a given functional circuit is fabricated.) (Figure: A) Single core; B) Multi core.)

Why were they introduced? As we grow further in terms of processing power, the hardware industry faces three main challenges.

Power: The amount of power consumed by processors has been increasing as more and more powerful processors have been introduced to the market. The environmental cost and the energy needs have compelled manufacturers as well as organisations to reconsider their strategies, to the point where a change in the way processors are manufactured and operate was inevitable. Processors can be overclocked or underclocked. Overclocking a processor increases the number of instructions it can execute, but at the same time increases the power consumption; moreover, overclocking does not guarantee a performance improvement, as there are many other factors to consider. Increasing the number of cores per processor (four or eight) further improves the power-to-performance ratio.

Memory clock: The memory clock has not improved like the CPU clock, placing a limitation on processor performance.
Often the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. So instead of building faster CPUs, one can underclock the CPU and have a larger number of cores with their own dedicated memories, so that more instructions are executed in the same given time. Also, clock speed itself won't grow indefinitely; due to fundamental physics it has hit a wall, with chips melting above roughly 5 GHz. Placing two or more powerful computing cores on a single processor opens up many possibilities. True concurrent applications can be developed only on multi-core processors; on single-core processors, concurrent applications can overload the processor, degrading the performance of the application. On multi-core systems, since each core has its own cache, the operating system has sufficient resources to handle most compute-intensive tasks in parallel.

What are the effects of the hardware shift on concurrent programming? "The free lunch of performance in terms of ever faster processors is over", says Microsoft C++ guru Herb Sutter. For the past five decades the ever-increasing clock speed has carried the software industry through its progress, but now the time has come for software engineers to face the challenge staring directly at them, which they have managed to ignore so far. Also, as more and more cores are added to hardware, the gap between the hardware potential and the s

Saturday, January 18, 2020

Converting Paper Records to a Computer Based Health Record Essay

Traditional utilization of paper-based medical records leads to the dispersion of clinical information as a result of the heterogeneous character of hospital systems. Due to this, the development of a clinical information system that can integrate hospital information as well as enable cooperation amongst legacy systems became a difficult task. System integration, as well as the development of an efficient clinical information management system, was thereby dependent upon the creation of conceptual and architectural tools that would enable such integration. In line with this, many healthcare institutions are currently seeking to establish the integration of their workstations through the utilization of technological tools. Such tools are effective in the arrangement of clinical matters as well as in the arrangement of administrative and financial information. Clinical information systems are utilized by healthcare institutions in their integration of information. At this point, the utilization of electronic medical systems in healthcare delivery is evident in countries such as the United States, the United Kingdom, Sweden, Hong Kong, Canada, and Australia. The current shift from a human-memory-based paradigm to a technological paradigm can be traced to the recent emphasis on health care quality improvement and cost reduction. In light of this, policymakers started to adopt health information technology such as the Electronic Medical Record (EMR). According to Tim Scott in Implementing an Electronic Medical Record System, most information regarding the use of EMR systems is derived from the Regenstrief Institute, Brigham and Women's Hospital, the Department of Veterans Affairs, LDS Hospital, and Kaiser Permanente. The information derived from these institutions shows the following. First, success is dependent upon organizational tools rather than on the type of technology used.
Second, minimal changes were noted in terms of increased quality and efficiency as a result of the system's adoption. Such findings led to the slow adoption and implementation of EMR systems, since the majority of medical institutions and healthcare systems required high verifiability of the system's utility. True enough, research within these institutions also showed that EMR systems increase the quality of patient care by decreasing medical errors; however, the economic aspect of their use has not been well documented, leaving most medical institutions adamant regarding implementation. In light of this, the paper is divided into three parts. The first part presents the rationale behind the formation of the technology-based medical paradigm, formulated within the parameters of Thomas Kuhn's conception of scientific revolutions. The second part presents a discussion of the various EMR components and the problems encountered in their implementation at Kaiser. The last part concentrates on presenting possible solutions to the problems evident in the utilization of EMR systems within the Kaiser program, with specific emphasis on the role of the agent in successful implementation. Thomas Kuhn, in his work entitled The Structure of Scientific Revolutions, discusses the very nature and necessity of what he calls scientific revolutions. In this particular work, Kuhn sees an apparent parallelism between political revolutions on the one hand, and scientific revolutions on the other. Kuhn writes: "scientific revolutions … (are) those non-cumulative developmental episodes in which an older paradigm is replaced in whole or in part by an incompatible new one" (2000, p. 50). On a preliminary note, paradigms are frameworks in and through which we approach phenomena in general. They are models, so to speak. 
Naturally enough, different models employ different methodologies; different methodologies, in turn, generate different types of knowledge, which, consequently, have different criteria of proof or validity. Scientific development, as Kuhn contends, may appropriately be characterized by paradigm shifts, and this he calls scientific revolutions. It is important to note that scientific developments do not occur in a vacuum. For this reason, there is a felt need to situate scientific developments in the historical context within which they are conceived, proposed and, ultimately, institutionalized and integrated as part of society's shared knowledge. This is to say that scientific revolutions are proper objects of historical analysis and discourse inasmuch as political revolutions are. Kuhn contends that there is a parallelism between political and scientific revolutions. As pointed out earlier, he characterizes scientific revolutions as "those non-cumulative developmental episodes in which an older paradigm is replaced in whole or in part by an incompatible new one." Kuhn's characterization emphasizes two important points: first, that there is a replacement of an old paradigm by a new one; second, that the new paradigm is not merely something new but is also incompatible with the old paradigm. This is to say that the incompatibility or irreconcilability of the new paradigm with the old serves as warrant for the necessity of such a revolution. Although there are significant differences between scientific and political developments, Kuhn argues that one may be justified in using the notion of revolution as a metaphor for understanding them. He writes: Political revolutions are inaugurated by a growing sense, often restricted to a segment of the political community, that existing institutions have ceased adequately to meet the problems posed by the environment that they have in part created. 
In much the same way, scientific revolutions are inaugurated by a growing sense, again often restricted to a narrow subdivision of the scientific community, that an existing paradigm has ceased to function adequately in the exploration of an aspect of nature to which that paradigm itself had previously led the way. (2000, p. 150) Kuhn's parallelism is thus founded on the idea that in both cases, a sense of malfunction (in our institutions for the political case, and in our paradigms for the scientific case) necessitates the occurrence of a revolution. In relation to this, the shift from a human-memory-based paradigm to the technological paradigm may be likened to a revolutionary development within the field of medical data acquisition and retention. The difference between the human-memory-based paradigm and the technological paradigm stems from the ascription of greater subjectivity to human-memory-based data as opposed to technologically maintained data. As was stated in the first part of the paper, the heterogeneous character of medical institutions stems from the existence of various separate holistic systems within them. As a result, deriving and correlating clinical information becomes tedious. The main reason for this is the human-memory-based paradigm's utilization of paper-based records, which have a high probability of non-viability and unreliability. Examples of this are evident in evidence-based medicine's non-adherence to the traditional methods of training and practice. Second, paper-based records fall short of their original expectations. The objective of the healthcare record is "to identify problems and to understand the impact of the illness on the individual," thereby enabling the "amelioration of the problem to the patient's satisfaction, within the bounds of medical capabilities and society's resource limitations" (Simpson and Robinson, 2002, p. 115). 
The main limitation of paper-bound records, therefore, stems from their inability to be accessible to multiple users at once. On the other hand, Scott relates the development of a technology-based paradigm to the high verifiability of the positive results of technologically determined medical care processes. According to Scott, "new technologies make it possible to evaluate and intervene to improve care in ways not heretofore possible" (2002, p. 2). In line with this, members of both the public and private sectors lobby for the accessibility of technological improvements. For members of the private sector, this is due to the inclusion of the medical industry within the business sphere. For members of the public sector, on the other hand, demands for greater accountability for health care stem from the prevailing belief that technological advancements must be made accessible to the general public. According to the IOM, information technology's role in the substantial improvement of the redesign of the healthcare system is important since it ensures the formation of "a strong infrastructure in supporting efforts to reengineer care processes … coordinate patient care across clinicians and settings and over time, support multidisciplinary team functioning, and facilitate performance and outcome measurement for improvement and accountability" (qtd. in Scott, 2002, p. 4). The results of the success of the EMR are traceable to developments within the field of e-Health. According to Silber, the EMR serves as the fundamental building block for the development of various applications, such as the use of ICT by the Primary Health Care Team. Others involve the use of the EMR for validation of research or as an instrument in Continuing Medical Education. 
Information necessary for the functions described above, in relation to the personal health record, is available because the health record's functionality enables the inclusion of the following: practitioner order entry, electronic patient record, document management, clinical decision support, administrative data, integrated communication support, and access to knowledge and resources. According to Raymonds and Dolds, the functions of each component are as follows. The electronic patient record presents the patient's history. Document management contains the actions undertaken in relation to the patient's diagnosis. Clinical decision support, in turn, contains "the alerts based on current data from the electronic medical record, evidence based practical guidelines or more complex artificial intelligence systems for diagnostic support". Access to administrative information such as admission and discharge is contained within the section encompassing administrative data. Integrated communication support provides the tools for facilitating effective and efficient communication among members of the patient's health team. The last component enables access to other sources of information regarding the patient's condition (Scott, 2007, p. 4). The Kaiser Permanente EMR implementation presented one of the main problems in the utilization of the components of the technology-based paradigm. The problems arose from several factors, ranging from the software's lack of efficiency, to the mismatch between specific qualities of the program and the social conditions in the region, to the team's lack of background in the division of the workflow that the program required, as well as the program's dependence upon all the players within the medical institutions where it was implemented. 
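As a rough illustration of how the components enumerated above fit together, the sketch below models a health record as a simple data structure. This is a hypothetical example only: every class name, field name and sample value is invented for illustration and does not correspond to any real EMR product or schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: names and sample data below are illustrative only.

@dataclass
class ElectronicPatientRecord:
    history: list = field(default_factory=list)   # the patient's history

@dataclass
class ClinicalDecisionSupport:
    alerts: list = field(default_factory=list)    # alerts driven by current record data

@dataclass
class EMRecord:
    patient_record: ElectronicPatientRecord
    documents: list        # document management: actions taken on the diagnosis
    decision_support: ClinicalDecisionSupport
    admin_data: dict       # admission, discharge and similar administrative data
    messages: list         # integrated communication support
    knowledge_links: list  # access to external knowledge and resources

record = EMRecord(
    patient_record=ElectronicPatientRecord(history=["2007-03-01: hypertension diagnosed"]),
    documents=["2007-03-01: medication prescribed"],
    decision_support=ClinicalDecisionSupport(alerts=["blood pressure above guideline threshold"]),
    admin_data={"admitted": "2007-02-28", "discharged": "2007-03-02"},
    messages=["nurse note forwarded to attending physician"],
    knowledge_links=["evidence-based hypertension guideline"],
)
print(record.admin_data["admitted"])
```

The point of the structure is the one made in the text: once each component is a distinct, well-defined part of a single record, practitioner order entry, decision support and administrative reporting can all read from the same integrated source rather than from separate paper files.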
Scott, however, stated that what should be given credence with regard to the failed project above is not so much the failure of the program as the possibilities it opened for the creation and implementation of new EMR programs in the future. Scott states, "success and failure are socially negotiable judgments, not static categories" (2007, p. 43). If such is the case, it is thereby possible to conceive of the problems noted by Hartswood et al. (2003) in relation to the user-led characteristic of the EMR. The social negotiability of judgments ensures the possibility of reversals in judgment as soon as occasions arise wherein a perceived failure may be reconnected with an overall success. In line with this, the continuous development of the various EMR systems produced and implemented within the country ensures the viability of a near success within the system, which in a sense also ensures the possibility of another scientific revolution in the near future whose scope may extend beyond the technological sphere.

Friday, January 10, 2020

Research Paper Taxation Essay

Wage is a fixed amount of compensation for service rendered covering a fixed period of time, usually hours, or a fixed amount of work. It is usually compensation given to skilled and unskilled labor. Commission is usually a wage given to a salesperson based on the amount of his sales; this amount is usually added to the basic salary. Bonus is given to stimulate employees to work more efficiently and effectively (Valencia & Roxas, 2009). To make sure that employees comply with BIR regulations and local government laws, companies must include crucial employee and company information in their payroll systems. Setting up and running the different components that comprise a payroll system requires due diligence and adequate knowledge of tax legislation. Employee's benefits Philippine Accounting Standards (PAS) 19, paragraph 7, states that employee benefits are all forms of consideration given by an entity in exchange for services rendered by employees. These benefits may be paid directly to the employees or to their dependents, such as their children or spouses, and can be settled by payment in cash or in the form of goods and services. Paragraph 4 of PAS 19 enumerates the following four classes: (a) short-term employee benefits; (b) post-employment benefits; (c) other long-term employee benefits; and (d) termination benefits. Employee information During the new-hire process, companies must collect information such as medical insurance and W-2 forms to determine what should be deducted from an employee's paycheck. These forms also provide employers with crucial information, such as the employee's Social Security number and withholding amount for government tax purposes. The system must also track and process changes made to the employee's tax exemption status, pensions, insurance plans or retirement funds. 
Salary information As part of the new-hire process, payroll systems include a component that designates which employees are full time, part time and contractors. Classifying workers in a payroll system is important since the government levies high penalties on companies that categorize employees incorrectly. Applicable taxes and deductions The National Internal Revenue Code (R.A. 8424) requires the employer to withhold a portion of the salaries earned by employees that will at least approximate the income tax due of the earner relative to the income earned. The monthly or semi-monthly withholding tax tables can be obtained from the BIR to serve as a guide on the amount to be withheld from the salary of the employee (http://www.ehow.com/list_6725482_components-payroll-system.html, 17 July 2010). In preparing a payroll, certain government-mandated contributions need to be deducted from the gross pay of each employee. These include withholding taxes, PAG-IBIG, SSS (Social Security System) and PhilHealth contributions. Withholding taxes are remitted to the BIR, while PAG-IBIG contributions are remitted to the Home Development Mutual Fund (HDMF) (Cabrera, Ledesma & Lupisan, 2009). Other payroll withholdings include employee contributions to benefits, retirement accounts, and charities; these are determined by the employee during the fringe-benefits selection process offered by the employer and must be taken into account, as well as any employer matches, when reporting payroll. Methods of Payroll Computation A payroll system involves everything that has to do with the payment of employees and the filing of employment taxes. This includes keeping track of hours, calculating wages, withholding taxes and other deductions; thus appropriate methods must be applied in the computation to achieve a desirable output. More and more aspects of payroll are being handled electronically. 
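The deduction flow just described, gross pay less the government-mandated contributions, can be sketched in a few lines. The peso amounts below are placeholders chosen for illustration only; they are not actual BIR withholding or SSS/PhilHealth/Pag-IBIG contribution figures, which change over time and must be taken from the official schedules in force for the payroll period.

```python
# Sketch of computing take-home pay from gross pay and mandatory deductions.
# All amounts are illustrative placeholders, NOT real government schedules.
def net_pay(gross_pay, withholding_tax, sss, philhealth, pagibig, other=0.0):
    """Gross pay less government-mandated and other deductions."""
    deductions = withholding_tax + sss + philhealth + pagibig + other
    return gross_pay - deductions

# Hypothetical semi-monthly payslip:
take_home = net_pay(gross_pay=20000.0, withholding_tax=1875.0,
                    sss=800.0, philhealth=275.0, pagibig=100.0)
print(take_home)  # 20000 - (1875 + 800 + 275 + 100) = 16950.0
```

In a real payroll system, each deduction would be looked up from the applicable contribution table rather than passed in by hand, and the withheld amounts would be accumulated for remittance to the BIR, SSS, PhilHealth and HDMF respectively.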
Methods include direct paycheck deposit, debit cards (payroll and non-payroll), and the use of Web-based information systems that allow employees access, with a secure password, to their individual payroll records, including pay stubs, an earnings record and, in some cases, employer information such as the company manual or a health insurance plan overview (Banning, 2008). Giove (1993) stated the seven methods for computing payroll: Hourly Rate Plan Employees paid on an hourly rate plan receive a fixed amount for each hour they work. An employee's regular earnings are equal to the employee's hourly rate multiplied by the number of hours worked during the payroll period. Salary Plan Salaried employees receive a fixed amount for each payroll period, whether weekly, biweekly, semimonthly, or monthly. If an employee on the salary plan works less than the regular hours during a payroll period, the employer may deduct for the time lost, although in most cases the employer does not make such a deduction. Regular earnings would be determined by multiplying the equivalent hourly rate by the actual number of hours the employee worked during the payroll period. Overtime Pay All employees in all establishments and undertakings, whether for profit or not, are entitled to overtime pay for work rendered beyond eight (8) hours. This does not apply, however, to managerial employees, field personnel, members of the family of the employer who are dependent on him for support, domestic helpers, persons in the personal service of another, and workers who are paid by results. Employees in the government are also entitled to overtime pay, but they are governed by Civil Service laws and rules; only employees in the private sector are covered by the Labor Code. Guaranteed Wage This is a written agreement to pay an employee a guaranteed minimum amount regardless of the hours worked, with an extra half-time premium for hours over 40. 
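The hourly rate plan and overtime pay rules above can be sketched together. The sketch assumes the commonly cited Labor Code premium of an extra 25% of the hourly rate for work beyond eight hours on an ordinary working day; the premium actually applicable (rest days, holidays, night shifts) differs and should be checked against the law for the specific case.

```python
# Sketch of the hourly rate plan with overtime, assuming an ordinary
# working day. The 25% overtime premium is the commonly cited Labor Code
# figure for ordinary days; other day types carry different premiums.
REGULAR_HOURS_PER_DAY = 8
OT_PREMIUM = 0.25  # extra 25% of the hourly rate per overtime hour

def daily_earnings(hourly_rate, hours_worked):
    """Regular pay for up to eight hours plus premium pay for the excess."""
    regular_hours = min(hours_worked, REGULAR_HOURS_PER_DAY)
    ot_hours = max(hours_worked - REGULAR_HOURS_PER_DAY, 0)
    regular = regular_hours * hourly_rate
    overtime = ot_hours * hourly_rate * (1 + OT_PREMIUM)
    return regular + overtime

print(daily_earnings(100.0, 10))  # 8*100 + 2*125 = 1050.0
```

The same skeleton extends naturally to the other plans Giove lists: a piece-rate plan replaces hours with units produced, a commission plan replaces them with sales, and a combination plan adds either of those on top of a fixed salary.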
Piece-Rate Plan This is a compensation plan whereby employee earnings depend on the units produced. Commission Plan Sales commission plans vary greatly from company to company but are generally based on the sales made during the payroll period. Combination Plan This is a compensation method whereby employees receive a fixed amount of salary for each payroll period plus an extra amount for production (piece-work) or sales (commission). Timekeeping Records Accurate timekeeping is an essential part of an efficient payroll system. Every business must have an orderly method of recording the hours employees worked during the payroll period. The time records show the date and time the workweek starts, the number of hours worked each day, and the total hours worked during the week. Time records are filed after the payroll is prepared and, in accordance with the requirements of the law, retained for up to three years. The most common methods of timekeeping use a time clock with timecards or a time sheet. There are two primary reasons to maintain accurate payroll records. The first is the collection of the data necessary to compute the compensation for each employee for each payroll period. The second is the provision of information needed to complete the various government reports, federal and state, required of all employers. All business enterprises, both large and small, are required by law to withhold certain amounts from employees' pay for taxes, to make payments to government agencies by specific deadlines, and to submit reports on official forms (McQuaig & Bille, 2008). Other Aspects of a Payroll Accounting System Payroll Register The payroll register summarizes employee earnings and deduction information in a journal entry that is inserted into the general ledger for accounting and general research purposes. Payroll registers are also used to create tax reports. These documents are prepared by payroll staff or generated using a payroll computer system. 
Payroll Services The meteoric success of payroll services is not accidental, but rather a reflection of the business community's willingness to outsource the tedious and complex task of payroll accounting to outside specialists. The upside of outsourcing payroll is that payroll services ensure that the company complies with laws pertaining to payroll. That is a big deal considering the time investment it would take the payroll officer to stay current on payroll-related legislation. Another big plus is that payroll services are responsible for keeping track of each employee's accumulated earnings, tax withholding, and other information needed to issue W-2 forms at the end of the year. They also stay on top of things like direct deposits, salary adjustments, quarterly tax payments and all of the other details that can be a distraction from the important job of leading the company (http://Gaebler.com/payroll-services, 8 Aug. 2010). In-house Payroll If contracting a payroll service does not sound like a good fit for a business, management also has the option of doing payroll in-house. But if management plans on saving money by personally administering the payroll, considering the alternatives first is a better idea. Even if the company has only a few employees, dealing with payroll-related details can be a waste of time. Instead, designate the job to an employee who can give it the time it requires, so that precious time can be dedicated to other things (http://Gaebler.com/in-house-payroll, 8 Aug. 2010). Whoever ends up doing payroll in the company will be happy to know that there is a lot of software out there to help them. In fact, most accounting software solutions have payroll modules. Start by assessing the capability of the current accounting software program. If it does not have a built-in payroll function, chances are it is available from the manufacturer as an add-on. 
If it is not, then the company needs to decide whether to change to accounting software that does or to find a payroll program that is compatible with the current system. Either way, it is worth the time to find a computerized system that meets the company's needs rather than trying to do it the old-fashioned way. Internal Control A district's accounting and payroll functions are critical for the maintenance of a solid financial foundation. Accurate and timely financial reports are crucial to administration and board decision-making. Payroll must be accurate, as it represents the district's largest budgeted expenditure. Internal controls must safeguard the district's assets from misappropriation. Payroll processing is an error-prone activity. If an organization has just one or two employees it may seem relatively easy to compute salaries outstanding, taxes, etc., but as a small business starts adding employees it finds itself spending more and more time on the computation of salaries, including variable pay. Errors are common in the full and final settlement and increase when employees join in the middle of a term, as the processes are manual (http://ezinearticles.com/?expert=Mikael Anderson, 4 Aug. 2010). Waterhouse (2010) said in one of his studies that the objective of internal controls for payroll is to ensure that payroll disbursements are properly recorded and that related legal requirements (such as payroll tax deposits) are complied with. Segregation of duties is an effective internal control. The bank reconciliation clerk reconciles the bank accounts and is not involved in processing or approving items for payment. A payroll administrator, supervisor, specialist and six clerks perform the payroll function. The Human Resources Department (HRD) enters employee data into a database shared by Personnel and Payroll and sets the rate of pay. The software system controls the ability of individuals to change information based on their access to the system. 
This prevents unauthorized individuals from changing this information (http://window.state.tx.us, 6 Aug. 2010). Gelinas, Sutton and Hunton (2005) included in their study some of the procedures that can be used to prevent or detect schemes. First is the direct deposit of payroll to eliminate alteration, forgery and theft of paper checks. Second is checking for duplicate names, addresses, and Social Security numbers in the employee data. Finally, there is comparing actual to budgeted payroll. Expense accounts are often an area of fraud and abuse. This includes: (a) using legitimate documentation from personal expenses for business expenses; (b) overstating expenses by altering receipts; and (c) submitting fictitious expenses by submitting copies of invoices. Such abuses can be minimized by formulating reasonable policies that compensate employees for their out-of-pocket expenses. Copies of invoices should be accepted only in extreme circumstances. Finally, expense account activities should be monitored on a regular basis to detect unusual patterns (Gelinas, Sutton & Hunton, 2005). Payroll Fraud Connection Payroll, similar to cash disbursements, is an area ripe with fraud potential. After all, large organizations will make thousands of payments to employees for payroll and expense account reimbursement every payroll period. Firth (2006) stresses that payroll fraud is an important issue that needs to be addressed by both finance and payroll professionals. Some of the key activities that need to be considered include: improving the quality of master file data, reviewing the end-to-end payroll process, and reviewing the people performing each step in the payroll function. It is worth remembering that improving each of these areas will not only reduce the risk of payroll fraud, but will also result in many other business improvements right across the organization. 
Here are some of the common types of payroll fraud: (a) ghost employees, where paychecks are issued to people who do not actually work for the company, whether recently departed employees or made-up persons; (b) falsified hours and salary, where employees exaggerate the time they work or increase the salary in their employee data; (c) commission schemes, where employees falsify the sales on which commissions are based or increase the commission rate in their employee data; and (d) false worker's compensation claims, where employees fake injuries to collect disability payments (Gelinas, Sutton & Hunton, 2005).
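Two of the detection procedures described above, comparing the payroll register against the HR master file to flag possible ghost employees, and checking for duplicate Social Security numbers in the employee data, can be sketched as simple audit checks. The data shapes and field names here are hypothetical, for illustration only.

```python
from collections import Counter

# Illustrative audit-check sketches; record layouts are hypothetical.

def ghost_employees(payroll_ids, hr_master_ids):
    """Employee IDs paid this period that have no record in the HR master file."""
    return sorted(set(payroll_ids) - set(hr_master_ids))

def duplicate_ssns(records):
    """Social Security numbers that appear on more than one employee record."""
    counts = Counter(r["ssn"] for r in records)
    return sorted(ssn for ssn, n in counts.items() if n > 1)

paid_this_period = ["E01", "E02", "E99"]
hr_master = ["E01", "E02"]
print(ghost_employees(paid_this_period, hr_master))  # ['E99'] warrants investigation
```

A flagged ID is not proof of fraud by itself (it may be a recently departed employee whose final pay is legitimate), which is why these checks feed an investigation rather than an automatic block; the third procedure, comparing actual to budgeted payroll, catches in aggregate what per-record checks miss.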

Thursday, January 2, 2020

Global Warming Is Caused By Human Beings

Global Warming Global warming appears to be caused by human beings. There is too much CO2 in the atmosphere for plants and trees to take in all of it. There is strong evidence that humans are to blame, not just due to cars and factories but also from agriculture. A majority of scientists and scientific organizations believe humans are causing global warming. Global warming is controversial; it is a perplexing phenomenon. Some people think it is a normal occurrence, others are afraid of the consequences, and some say it is a myth. However, sudden climate change is starting to become an adversity. When we look at natural disasters that were not expected to occur, we have to ask what is happening to this world. When we see the differing opinions of scientists and governments about it, it can be confusing. "The potential threats are serious and actions are required to mitigate climate change risks and to adapt to deleterious climate change impacts that probably cannot be avoided" (ACS, 2010). There are also many organizations which study this phenomenon and have evidence for it, such as the IPCC and NASA. But the real argument is who caused global warming? There are two possibilities: either humans, or it occurs naturally. There is evidence suggesting that the Earth's natural cycles include periods of global warming, but there is also evidence suggesting that humans have contributed enough CO2 to the environment to cause global warming.