Thursday, October 31, 2019

Argue in support of the statement that the English constitution was first written in 1647, during the Putney debates - Essay

Argue in support of the statement that the English constitution was first written in 1647, during the Putney debates - Essay Example Looking back into more immediate history, the growth of representative democracy can ultimately trace its roots to the way in which constitutions and other such binding restrictions have defined the role and relationship that a government must play towards its citizens/stakeholders.1 As a function of this particular understanding, the following analysis will focus upon engaging the reader with an understanding and discussion of how the Putney Debates served as a formative historical precedent for the way in which the constrained power of the state was understood and effected. Such an understanding develops into the argument that the English constitution was in fact first written during these debates. Beyond merely engaging the reader with a further understanding of the historical importance of the debates, it is the hope of this author that such a discussion can further underscore a level of historical understanding and key trends that culminated in a more realistic understanding of the manner in which subject and governed should interact with one another. The Putney Debates were unlikely to be considered a formulary of a constitution at the time in which they were held. However, in retrospect, the issues that the individuals were wrestling with could only be understood in terms of the way in which a defining document, set of rules, and/or constitution could address the contentious issues at hand. The core arguments that were taking place between the "radicals" and the monarchists were with respect to the role of the individual and the way in which such an individual should have a level of power and determinism with respect to the state. Naturally, this very question has been one which has contributed greatly to the way in which governments have interacted with their people since

Tuesday, October 29, 2019

Women and Crime - Annotated Bibliography Example

Women and Crime - Annotated Bibliography Example The journal discusses in detail the various roles of women. It is very difficult for a woman to face the world of crime and help eradicate crime from society. Women must mould themselves to become very strong so that they can face offenders and take proper action against them. The journal is a very resourceful source of knowledge, as it broadens our view of the roles of women and how efficiently a woman can play the role of a police officer. Australasian Council of Women and Policing. (1999). The Journal for Women and Policing. Melbourne: Austral Media Group P/L. This scholarly journal refers to women and policing. It extensively discusses the role of women in the field of policing. The journal discusses in detail how women contribute to protecting society and preventing crime. It also discusses the problems that women have to face while working as police officers in a professional work environment. Women face many difficulties and hardships while working in a professional environment and confronting offenders and people who commit crime.

Sunday, October 27, 2019

What It Means To Be Human Religion Essay

What It Means To Be Human Religion Essay

First of all, I would like to emphasize the theological and academic depth of Dr Mark Elliott's paper. I would also like to acknowledge the initiative of the Ecumenical Institute at Bossey (especially of its director, Professor Fr Ioan Sauca) in organizing this dialogue between Evangelicals and Orthodox, which enables members of each tradition from different national contexts to meet and explore areas of convergence on major Christian themes. Such discussion between the two different traditions might bring to light common points of doctrine and bring them closer to one another. The Orthodox need to draw nearer to the Evangelicals, and the Evangelicals need to see the Orthodox world with more confidence. The time has already come for us to be no longer divided.

The theme of this year's seminar, What it means to be Human, will enable Orthodox and Evangelicals to explore, compare and contrast their understandings of what it means to be human (theological anthropology), and to reflect on how the extent of convergence in this area might bring us closer together theologically and facilitate our joint practical action.

Convergence and Divergence

1. The four distinctive characteristics

In the first part of his paper, Dr Elliott outlines present day Evangelicalism both from a historical and a doctrinal perspective. I have noted the four distinctive characteristics of Evangelicalism - those of conversionism, Biblicism, crucicentrism and activism - to which the Evangelical Alliance has added that of Christocentrism, for [as Dr Elliott says] it is hard to imagine any Christian movement or denomination that would not call itself Christocentric.

The first four characteristics are also found in the Orthodox doctrinal framework in a more or less nuanced form. However, the Orthodox might have some problems with the fifth element, Christocentrism, in the sense in which the author has mentioned it. The author writes, "I think Evangelicals do have a particular way of understanding the incarnation as being less about the assumption of humanity than as the activity of the God-man individual, who is more a substitute than a representative." For the Orthodox, Jesus Christ, through his incarnation, has assumed our humanity in his divine hypostasis. In Jesus Christ our human nature has received its real existence, not as being its own centre but in a pre-existent centre, namely in the unity of the divine hypostasis of the Logos. Through his incarnation the hypostasis of the divine Logos did not unite with another human hypostasis; rather, he assumed human nature in his eternal divine hypostasis, becoming, by means of this event, the hypostasis of our own human nature. Hence, through his incarnation, Jesus Christ as Son of God became united in a supreme manner with our humanity. In other words, he came into the closest possible proximity with us. This process is a consequence of the hypostatic union. That is why he is called God-Man.

From this point of view, our humanity has been healed from all the effects and consequences of the original sin by Jesus's sacrifice and resurrection. It is important to emphasize in this context that the sacrifice of Jesus was directed not only towards his Father but also towards his own human nature and, implicitly, towards us human beings. Through his sacrifice offered to God, Jesus Christ is made perfect as a human being, sanctifying or perfecting other human beings through this.
The author of the Epistle to the Hebrews writes clearly on this matter: "and having been made perfect, he became the source of eternal salvation for all who obey him" (Heb. 5:9). Or, "For if the blood of goats and bulls, with the sprinkling of the ashes of a heifer, sanctifies those who have been defiled so that their flesh is purified, how much more will the blood of Christ, who through the eternal Spirit offered himself without blemish to God, purify our conscience from dead works to worship the living God!" (Heb. 9:13-14). The same author says further: "it is by God's will that we have been sanctified through the offering of the body of Jesus Christ once for all … For by a single offering he has perfected for all time those who are being sanctified" (Heb. 10:10,14). Christ has become through his cross and resurrection the first-fruits of those who have fallen asleep (1 Cor. 15:20). Therefore, he is not a substitute for humankind, but the one who fully assumed and fulfilled it. From this perspective, we as human beings do not remain external to the incarnation, but are truly present in it.

2. Atonement: one of the three issues in Evangelical theological anthropology

For the Orthodox, Christ's sacrifice and his death on the cross are not understood as penal substitutionary atonement. From this point of view, statements like "sin incurs divine wrath and judgment …" and "on the cross, Jesus sacrificially atoned for sin by dying in our place and paying the price of such sin" are problematic for our soteriological doctrine.

The Orthodox understand Jesus's death on the cross as being more a healing of the human nature disfigured by sin, and not as a price that Jesus had to pay in our place in order to satisfy God, whose honour is offended by our sin. In view of the fact that we have been created as an overflow of God's love, our sin has caused him more sadness than offence. The concept of a substitutionary sacrifice, by means of which the offended honour of God was re-established, has more to do with a so-called juridical act (sin-punishment-redemption) than with one which would express the divine love or sympathy. In this respect, the Orthodox might also have a problem with the concept of inherited guilt. Although Paul seems to be quite clear in this respect - "Therefore, just as sin came into the world through one man, and death came through sin, and so death spread to all because all have sinned" (Rom. 5:12) - I think this may cause difficulties for the Orthodox. In my own opinion, Paul is talking in this context about the consequences of sin rather than inherited guilt resulting from Adam's sin. According to the theology of the church fathers, we consider the cross as the way to resurrection. From this point of view, Orthodox theology is more resurrectional than Evangelical theology, although this does not mean that the Orthodox put less emphasis on the sacrifice of Christ than on his resurrection. In Orthodox worship, the veneration of the cross is not separated from the praise of the resurrection. This is wonderfully illustrated in a liturgical hymn: "We worship your Cross, Jesus Christ, and your holy Resurrection we praise and honour." When considering the difference between a Calvinian and a Grotian understanding of the cross, the Orthodox may ask, are the Evangelicals more Calvinian or Grotian? Dr Elliott points out that Calvin sees God as being pleased because his Son as man obeyed him. For Grotius, God is above any such sense of being offended.
The anthropological premise is that humans are taken seriously by God, but what does this mean? From this point of view, the Orthodox are closer to the doctrinal position of Grotius than to that of Calvin. My question is further justified by the following point made by Dr Elliott: "To be honest, those who espouse a view that God the Father did not send his Son to the cross with a view to his bearing a penalty are arguably those who see the cross as one doctrine among others, and perhaps are not crucicentric enough to be traditionally Evangelical."

3. The Authority and Power of the Bible and The Uniqueness and Universality of Christ - two theological issues highlighted by the Lausanne Covenant, 1974

The concept of mission, based on the authority and power of the Bible and the uniqueness and universality of Christ, may be a point of convergence between the two traditions. Nevertheless, there is a tendency for the Orthodox to put more emphasis on the liturgical reading of the Bible than on the teaching and preaching of it. As Professor John Breck has said:

"… however important the place of the Bible may be in both personal and liturgical usage, for many Orthodox that place is purely formal. They respect and venerate the Scriptures, they recognize many familiar passages, particularly from the Sunday Gospel readings, and they insist that theirs is a biblical Church. Nevertheless, only a small minority seeks daily nourishment from Bible reading. … we Orthodox have all too often neglected or even abandoned our patristic heritage which placed primary emphasis on the preaching of God's Word."

In this sense, the frequently made comment that the Orthodox kiss the Bible and don't read it is not entirely unjustified.

4. The true image of God

The idea that human beings are created by and in Christ as the true image of God (Heb. 1:1-4) with the hope of a blessed and immortal life is a point of convergence between our traditions. Yet the church fathers do not speak only of the image of God, but also, and to an equal extent, of the resemblance (likeness) to God. In this sense St John of Damascus says, "the phrase according to the image means the reason and freedom, whereas according to the resemblance means likeness …". He continues, "the image is developed into likeness through the practice of virtues." Therefore, the image of God is something which is given to us, and the resemblance is something that we have to achieve. It is only in this sense that we might accept what Paul Evdokimov said: "An image without resemblance is one reduced to passivity." But even in a passive state, the realization that we are made in the image of God remains eikona tou Theou. This reality is beautifully expressed in the words of the Orthodox funeral service: "I am the image of your ineffable glory, though I bear the marks of my transgressions."

Since man is created not only from dust but also through God's breath of life, it becomes obvious that he has a special relationship with the nature from which he is formed, and also with God his Creator. As St Gregory of Nazianzus affirms, "Since from dust I have been created, I belong to the earthly life; but being also a small divine part, I also carry in my life the desire for eternal life." Therefore, because he is made in the image of God, man is rooted and anchored in eternity. But being the image of God refers not only to the soul but also to the body.
St Gregory Palamas sees the image as relating to the whole human being: "The name man does not refer to the soul or the body in a separate way, but to both at the same time, because they were created together according to the image of God."

5. The weakness of the will and the grace of God

We as Orthodox fully agree that after Adam's sin, the human will remained very weak. But in spite of this fact, human beings still have the freedom to choose for God. This was specially emphasized by the Patriarch Jeremias during a dialogue with the Lutherans around 1580, as Dr Elliott comments: humans preserved the ability to choose for God - freedom as the possibility of choice. According to the Orthodox perspective, grace does not force or limit the human will and its freedom. That is, grace does not work in or for human beings in an irresistible way, forcing them to receive grace in order to be saved without their collaboration. The reason why not all human beings are saved is not because this is predestined by God, with some people being saved and others being lost, but, rather, because of a lack of response by some people to grace. The grace of God does not force anyone to pursue actions independently of their will. That is why the Orthodox refuse to accept the concept of absolute predestination.

Nowadays, we hear more and more voices among Orthodox in favour of a relative predestination, in the sense that God "desires everyone to be saved and to come to the knowledge of the truth" (1 Tim. 2:4). This relative predestination is shown in God's will for every human being to be saved. This understanding of predestination sees it as conditioned by God's foreknowledge of people's collaboration or otherwise with divine grace: "those whom he foreknew he also predestined to be conformed to the image of his Son" (Rom. 8:29). If God predestined the salvation of only some human beings, that would contradict his eternal love as manifested in the incarnation and the cross of his Son, and also the ontological-universal value of Christ's sacrifice on the cross. From this point of view, Elliott's comment that for the sixteenth-century Lutherans in dialogue "the point of salvation was to have one's independent centre of decision-making removed, to be replaced with total dependence on God" sounds quite strange to the Orthodox.

6. Sanctification and justification

Are we wholly sanctified when we are justified? And when does this process happen? At conversion or at baptism? From Dr Elliott's paper we may conclude that there is a lack of consensus among the Evangelicals in this respect. Is human sinful nature totally destroyed? Are the roots of pride, self-will, anger and love of the world totally removed from the heart, as John Wesley claimed? These are questions that need to be addressed.

From an Orthodox perspective, conversion is the simple act of affirmation of a decision with regard to justification. The process of becoming holy begins with the sacrament of baptism. However, the fulfilment of holiness is obtained only at the end of a constant battle with sin and the continual practice of virtue. From this point of view, the Orthodox see two stages towards true holiness: sacramental holiness, obtained temporarily through baptism, and moral holiness, understood as a final stage to be reached. In this final stage, holiness corresponds to a stage which in patristic tradition and spirituality is called theosis. We will return to this issue again at the end of this paper.
The Orthodox perspective on the sinful nature of human beings affirms that after the fall, the image of God was not totally lost and human knowledge was not entirely reduced to a dark and opaque understanding of the world. Human beings can partially penetrate this opacity by means of another way of knowing, namely that which arises from virtue. The marring of the image of God (darkening of reason, corruption of the heart, weakening of the will) in human beings does not mean its destruction or abolition, for none of the human spiritual functions were completely destroyed through original sin. Original sin has only obscured the image of God in human beings, not destroyed it. The tendency and the capacity of human beings to know and to want to do what is good have also survived the fall, but obviously in a weakened state. Fallen human beings are also able to achieve virtue and overcome temptation "if you do what is right" (cf. Gen. 4:7); fallen human beings can reject death, choosing life: "See, I have set before you today life and prosperity, death and adversity … Choose life, so that you and your descendants may live" (Deut. 30:15,19). Therefore the Orthodox do not see human beings as totally fallen, nor the image of God in humanity as totally destroyed.

7. The anthropology of revivalism

I do not know to what extent Evangelicals accept the ideas of Charles Finney, presented in Dr Elliott's paper and summarized below. But some of these could, with certain qualifications, be shared by the Orthodox. For instance:

- Preach the reality of hell, not of sin.
- Being filled by the Spirit is vital since, in the spiritual battle, attack is the best form of defence.
- One must give the heart to God and submit to him.
- Repentance is a change of mind, as regards God and towards sin. It is not only a change of view, but a change of the ultimate preference or choice of the soul. It is a voluntary change and by consequence involves a change of feeling and of action toward God and toward sin. These words may be accepted by the Orthodox as being a clear definition of the meaning of repentance.
- Humans have responsibility to repent, and believers should not pray that God would help them to do that, for the sinner has to provide the will and disposition. However, these words leave no space for synergism, understood as cooperation between God and human beings concerning the process of their renewal.

The Pauline statement in 2 Corinthians 4:16 is very important in relation to the process of human renewal: "So we do not lose heart. Even though our outer nature is perishing, our inner nature is being renewed day by day." According to the Orthodox, our salvation includes:

- a passage from death to life, from darkness to light (John 3:1-6; Col. 1:13-14), through repentance, faith and baptism - I have been saved;
- a process of spiritual growth and maturation (2 Pet. 1:2-8) through ongoing repentance, faith and communion, often called deification - I am being saved; Paul writes of our inner life being renewed day by day;
- a promise of eternal life (2 Cor. 5:9-11; John 14:1-6), calling us to perseverance and righteousness - I shall be saved.

8. The relation between soul and body

What is the soul? Answers such as "the body is the image of God by association with the soul" and "soul and body are aspects of the human existence", quoted by Dr Elliott, may be seen as convergence points between the anthropologies of our two traditions.
Therefore, the human being's uniqueness consists in the close relation between spirit and soma. Salvation is for the whole human being - soma and soul. Similarly, the final act of universal judgment applies to the whole human being. Our anthropology is therefore understood only through the eschatological event. That is why the body will be raised again in order to be judged by the Creator, together with the soul with which it has formed a unity during its earthly life. From this point of view, the death of the body does not mean its destruction, but the passageway towards a new existence.

In view of the fact that Paul says "your life is hidden with Christ in God" (Col. 3:3), we may conclude that our humanity is a great mystery. Some of the Evangelicals appear to disagree with this.

9. Deification (Theosis)

Dr Elliott argues that Evangelicals may have problems with deification. Why should this be so? If we understand deification in the sense in which R. J. Bauckham and other theologians apparently did, as quoted in the paper, namely that humans become divine as God is divine, such a thought is obviously unthinkable for any Christian theologian, Evangelical or Orthodox. From an Orthodox point of view, deification is more than being in the image of God or being adopted as God's children. Being renewed by God's grace, we become partakers of the divine nature: "Thus he has given us, through these things, his precious and very great promises, so that through them you may escape from the corruption that is in the world because of lust, and may become participants in the divine nature" (2 Pet. 1:4). This does not mean that we become divine by nature. If we participated in God's essence or nature, the distinction between God and humans would be abolished. What this means is that we participate in God's divine grace, described in scripture in a number of ways, such as glory, love, virtue and power. We are to become like God by his grace, and truly his adopted children, but we never become God by nature. For we are human, always have been human, and always will be human. We cannot take on the nature of God.

Divinization, in the definitive form which the fathers gave it, looks towards a single goal. That is the goal of assuring man that the quest for the authentic person (not as a mask or as a tragic figure) is not mythical or nostalgic but a historical reality. Jesus Christ does not justify the title of Saviour because he brings the world a sublimely beautiful revelation of personhood, but because he realizes in history the very reality of the person and makes it the basis and hypostasis of the person for every man. According to some church fathers, this transformation occurs especially through the eucharist, for when Christ's body and blood become one with ours, we become Christ-bearers and partakers of the divine nature. St John of Damascus, writing in the eighth century, makes a remarkable observation. The word God in the scriptures refers not to the divine nature or essence, for that is unknowable. God refers rather to the divine energies - the power and grace of God which we can perceive in this world. The Greek word for God, Theos, comes from a verb meaning to run, to see or to burn. These are energy words, not essence words. In John 10:34 Jesus, quoting Psalm 82:6, repeats the statement, "You are gods."
The fact that he was speaking to a group of religious leaders who were accusing him of blasphemy allows, in my opinion, for the following interpretation: Jesus is not using the term god to refer to the divine nature. We are gods in that we bear his image, not his nature. Deification means that we are to become more like God through his grace, that is, through his divine energies. The process of our being renewed in God's image and likeness (Gen. 1:26) began when the Son of God assumed our humanity in the womb of the blessed Virgin Mary. Thus, those who are joined with Christ through faith in holy baptism enter into a re-creation process, being renewed in God's image and likeness. Based on the earlier Council of Chalcedon, as well as on the theology of Saint Maximus the Confessor (c.580-662), Palamas strenuously defended the church's teaching that a direct, personal experience of God himself (theosis) was accessible through God's energies made available through the hypostatic union of the two natures of Christ. The incarnate Word hypostasized human nature and acted in accordance with the divine and human wills. There was thus a sharing of attributes (communicatio idiomatum) whereby the humanity of Christ was penetrated by the divine energies and thereby deified. Those divine energies, which we partake of, were not understood as an impersonal something from God but as God himself, because Christ is consubstantial (homoousios) with the Father. Through the incarnate Christ, God gives himself to us in such a living, personal way that the gift and the giver are one and the same. Historically, deification has often been illustrated by the sword and fire metaphor. A steel sword is thrust into a hot fire until the sword takes on a red glow. The energy of the fire penetrates the sword. The sword never becomes fire, but it picks up the properties of fire. By application, the divine energies penetrate the human nature of Christ. Being joined to Christ, our humanity is interpenetrated with the energies of God through Christ's glorified flesh. Nourished by the body and blood of Christ, we partake of the grace of God - his strength, his righteousness, his love - and are enabled to serve him and glorify him. Thus we, being human, are being deified. Theosis means the transformation of being into true personhood in the person of Christ. The conclusion is that the ontology of personhood and communion which emerges from the understanding of the eucharist as a communion event in the body of Christ forms the basis for the understanding of the God-world relation and, more importantly, the patristic notion of energies. In this context, we can see that theosis is trinitarian through unity in the hypostasis of Christ. Theosis is, therefore, the ultimate goal toward which all people should strive, the blessed telos for which all things were made. It describes the ineffable descent of God to the ultimate limit of our fallen human condition, even unto death - a descent of God which opens to men a path of ascent, the unlimited vistas of the union of created beings with the Divinity. Deification is a descriptive term for God's redemptive activity towards human beings. When human beings respond to this activity, the ultimate transformation of a human being without losing personhood is made possible. It is a process that should be understood in a carefully qualified sense, as an ongoing process, going from one realm of glory to another (2 Cor. 3:18).
Even when the term deification is not explicitly mentioned, it is implicitly present as the content of the salvation proclaimed by the gospel.

Conclusions

In terms of a definition of what it means to be human, we may assert the following:

1. The human being is the image of God and at the same time is called to his resemblance (likeness).
2. Jesus's incarnation, cross and resurrection not only make possible the salvation of human beings, but also herald the starting point (beginning) of their deification.
3. The basis for the deification of human beings is found in Jesus Christ's deified nature. An example of this reality can be found in John 20:19-20. Here we read of the resurrected Jesus appearing to his ten disciples. He enters the house and stands in their midst although the doors were shut.
4. The justification and sanctification of human beings are two different processes with three distinct stages:
a. I have been saved - started in faith, repentance, baptism and Eucharist;
b. I am being saved - achieved by means of the life in Christ;
c. I shall be saved - continued in the process of deification in eternity.
5. The death of the body does not mean the dissolution of the human being, but it represents the entry into a new existence in God's presence. From this perspective, human beings are immortal.

Friday, October 25, 2019

The Lord Of The Flies: Themes Essay

The Lord of the Flies: Themes The world had witnessed the atrocities of World War II and began to examine the defects of its social ethics. Man's purity and innocence were gone. Man's ability to remain civilized was faltering. This change of attitude was extremely evident in the literature of the age, in which writers, through the use of clever symbolism, mocked the tragedy of man's fate. One such writer was William Golding, an author who had seen the destruction of war and despised its inevitable return. Through the use of innocent and untainted children, Golding illustrates how man is doomed by his own instinct. The novel is called Lord of the Flies, and it is of extreme importance in helping to reconstruct the wave of revolutionary ideas that swept the twentieth-century generation. Lord of the Flies portrays the belief of the age that man is in a constant struggle between darkness and light, the defects of human nature, and a philosophical pessimism that seals the fate of man. Golding's works, due to their rigid structure and style, are interpreted in many different ways. Their unique style differs from contemporary thought and is therefore open to criticism. The struggle between darkness and light is a major theme in all the works of William Golding. Strong examples of this are found throughout Lord of the Flies. The most obvious is the struggle between Ralph and Jack. The characters themselves have been heavily influenced by the war. Ralph is the representative of democracy. Elected as the leader, he and his companion Piggy keep order and maintain a civilized government. The strength of Ralph's character was supported by the power of World War II. Jack, on the other hand, represents authoritarianism. He rules as a dictator and is the exact opposite of Ralph. Jack exemplifies the Hitlers and Mussolinis of the world. He is what the world fears and yet follows. This struggle is born at the very beginning and escalates till the very end. The struggle in the book is a negative outlook on life in the future. One other example is the debate over the existence of the beast. The idea of a beast brings all into a state of chaotic excitement in which Ralph and Piggy lose control. Ralph and especially Piggy try to convince everyone that there is no such thing as a beast, in order to maintain order. Jack an... ...he war-paint and sticks of Jack and his followers. He too is chasing men in order to kill, and the dirty children mock the absurd civilized attempt to hide the power of evil. And so when Ralph weeps for the end of innocence, the darkness of man's heart, and the death of his true wise friend, Piggy, he weeps for all the human race." (Cox 164) Such a tragic view of the future of mankind and its nature is a perfect window for people to understand how the impact of the war made the world rethink its ethics, how life was thought of as a punishment in the extreme sense, and how there was no hope for the future except fear. This view has since changed, but not as greatly as one would imagine. The basic ideas are still there, and modern society may still relate to this novel. The interpretation may not be exact, but from now on mankind will always weep for "the end of innocence, the darkness of man's heart," and, most disturbing, "for all the human race."

Thursday, October 24, 2019

Sample APA Research Paper

Sample APA Research Paper

An APA Research Paper Model: Thomas Delancy and Adam Solberg wrote the following research paper for a psychology class. As you review their paper, read the side notes and examine the following: the use and documentation of their numerous sources; the background they provide before getting into their own study results; and the scientific language used when reporting their results.

Sample Title Page. (Side note: Place manuscript page headers one-half inch from the top. Put five spaces between the page header and the page number, as in "Running on Empty 1". Full title, authors, and school name are centered on the page, typed in uppercase and lowercase.) Running on Empty: The Effects of Food Deprivation on Concentration and Perseverance. Thomas Delancy and Adam Solberg. Dordt College.

Sample Abstract. (Side note: The abstract summarizes the problem, participants, hypotheses, methods used, results, and conclusions.) Abstract: This study examined the effects of short-term food deprivation on two cognitive abilities—concentration and perseverance. Undergraduate students (N = 51) were tested on both a concentration task and a perseverance task after one of three levels of food deprivation: none, 12 hours, or 24 hours. We predicted that food deprivation would impair both concentration scores and perseverance time. Food deprivation had no significant effect on concentration scores, which is consistent with recent research on the effects of food deprivation (Green et al., 1995; Green et al., 1997). However, participants in the 12-hour deprivation group spent significantly less time on the perseverance task than those in both the control and 24-hour deprivation groups, suggesting that short-term deprivation may affect some aspects of cognition and not others.

(Side note: Center the title one inch from the top. Double-space throughout.) Running on Empty: The Effects of Food Deprivation on Concentration and Perseverance. Many things interrupt people's ability to focus on a task: distractions, headaches, noisy environments, and even psychological disorders. To some extent, people can control the environmental factors that make it difficult to focus. However, what about internal factors, such as an empty stomach? Can people increase their ability to focus simply by eating regularly? One theory that prompted research on how food intake affects the average person was the glucostatic theory. Several researchers in the 1940s and 1950s suggested that the brain regulates food intake in order to maintain a blood-glucose set point. The idea was that people become hungry when their blood-glucose levels drop significantly below their set point and that they become satisfied after eating, when their blood-glucose levels return to that set point. This theory seemed logical because glucose is the brain's primary fuel (Pinel, 2000). (Side note: The introduction states the topic and the main questions to be explored. The researchers supply background information by discussing past research on the topic. Extensive referencing establishes support for the discussion.) The earliest investigation of the general effects of food deprivation found that long-term food deprivation (36 hours and longer) was associated with sluggishness, depression, irritability, reduced heart rate, and inability to concentrate (Keys, Brozek, Henschel, Mickelsen, & Taylor, 1950). Another study found that fasting for several days produced muscular weakness, irritability, and apathy or depression (Kollar, Slater, Palmer, Docter, & Mandell, 1964).
Since that time, research has focused mainly on how nutrition affects cognition. However, as Green, Elliman, and Rogers (1995) point out, the effects of food deprivation on cognition have received comparatively less attention in recent years. The relatively sparse research on food deprivation has left room for further research. (Side note: The researchers explain how their study will add to past research on the topic.) First, much of the research has focused either on chronic starvation at one end of the continuum or on missing a single meal at the other end (Green et al., 1995). Second, some of the findings have been contradictory. One study found that skipping breakfast impairs certain aspects of cognition, such as problem-solving abilities (Pollitt, Lewis, Garza, & Shulman, 1983). However, other research by M. W. Green, N. A. Elliman, and P. J. Rogers (1995, 1997) has found that food deprivation ranging from missing a single meal to 24 hours without eating does not significantly impair cognition. Third, not all groups of people have been sufficiently studied. (Side note: Clear transitions guide readers through the researchers' reasoning.) Studies have been done on 9–11 year-olds (Pollitt et al., 1983), obese subjects (Crumpton, Wine, & Drenick, 1966), college-age men and women (Green et al., 1995, 1996, 1997), and middle-age males (Kollar et al., 1964). Fourth, not all cognitive aspects have been studied. In 1995 Green, Elliman, and Rogers studied sustained attention, simple reaction time, and immediate memory; in 1996 they studied attentional bias; and in 1997 they studied simple reaction time, two-finger tapping, recognition memory, and free recall. In 1983, another study focused on reaction time and accuracy, intelligence quotient, and problem solving (Pollitt et al.). According to some researchers, most of the results so far indicate that cognitive function is not affected significantly by short-term fasting (Green et al., 1995, p. 246). However, this conclusion seems premature due to the relative lack of research on cognitive functions such as concentration and perseverance. (Side note: The researchers support their decision to focus on concentration and perseverance.) To date, no study has tested perseverance, despite its importance in cognitive functioning. In fact, perseverance may be a better indicator than achievement tests in assessing growth in learning and thinking abilities, as perseverance helps in solving complex problems (Costa, 1984). Another study also recognized that perseverance, better learning techniques, and effort are cognitions worth studying (D'Agostino, 1996). Testing as many aspects of cognition as possible is key because the nature of the task is important when interpreting the link between food deprivation and cognitive performance (Smith & Kendrick, 1992). (Side note: The researchers state their initial hypotheses.) Therefore, the current study helps us understand how short-term food deprivation affects concentration on and perseverance with a difficult task. Specifically, participants deprived of food for 24 hours were expected to perform worse on a concentration test and a perseverance task than those deprived for 12 hours, who in turn were predicted to perform worse than those who were not deprived of food. Method. (Side note: Headings and subheadings show the paper's organization.) Participants. Participants included 51 undergraduate-student volunteers (32 females, 19 males), some of whom received a small amount of extra credit in a college course.
The mean college grade point average (GPA) was 3.19. Potential participants were excluded if they were dieting, menstruating, or taking special medication. Those who were struggling with or had struggled with an eating disorder were excluded, as were potential participants addicted to nicotine or caffeine. (Side note: The experiment's method is described, using the terms and acronyms of the discipline.) Materials. Concentration speed and accuracy were measured using an online numbers-matching test (www.psychtests.com/tests/iq/concentration.html) that consisted of 26 lines of 25 numbers each. In 6 minutes, participants were required to find pairs of numbers in each line that added up to 10. Scores were calculated as the percentage of correctly identified pairs out of a possible 120. (Side note: Passive voice is used to emphasize the experiment, not the researchers; otherwise, active voice is used.) Perseverance was measured with a puzzle that contained five octagons—each of which included a stencil of a specific object (such as an animal or a flower). The octagons were to be placed on top of each other in a specific way to make the silhouette of a rabbit. However, three of the shapes were slightly altered so that the task was impossible. Perseverance scores were calculated as the number of minutes that a participant spent on the puzzle task before giving up. Procedure. At an initial meeting, participants gave informed consent. Each consent form contained an assigned identification number and requested the participant's GPA. Students were then informed that they would be notified by e-mail and telephone about their assignment to one of the three experimental groups. Next, students were given an instruction sheet. (Side note: The experiment is laid out step by step, with time transitions like "then" and "next.") These written instructions, which we also read aloud, explained the experimental conditions, clarified guidelines for the food deprivation period, and specified the time and location of testing. Participants were randomly assigned to one of these conditions using a matched-triplets design based on the GPAs collected at the initial meeting (a sketch of this kind of assignment appears below). This design was used to control individual differences in cognitive ability. Two days after the initial meeting, participants were informed of their group assignment and its condition and reminded that, if they were in a food-deprived group, they should not eat anything after 10 a.m. the next day. Participants from the control group were tested at 7:30 p.m. in a designated computer lab on the day the deprivation started. Those in the 12-hour group were tested at 10 p.m. on that same day. Those in the 24-hour group were tested at 10:40 a.m. on the following day. At their assigned time, participants arrived at a computer lab for testing. Each participant was given written testing instructions, which were also read aloud. (Side note: Attention is shown to the control features.) The online concentration test had already been loaded on the computers for participants before they arrived for testing, so shortly after they arrived they proceeded to complete the test. Immediately after all participants had completed the test and their scores were recorded, participants were each given the silhouette puzzle and instructed how to proceed. In addition, they were told that (1) they would have an unlimited amount of time to complete the task, and (2) they were not to tell any other participant whether they had completed the puzzle or simply given up.
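The matched-triplets assignment described above can be pictured with a short sketch: sort participants by GPA, slice them into consecutive triplets, and randomly hand one member of each triplet to each condition. The participant data and the function name below are invented for illustration; the paper does not publish its actual assignment procedure or code.

```python
# Hypothetical sketch of matched-triplets assignment based on GPA.
# Sorting by GPA before slicing into triplets keeps the three conditions
# roughly balanced on prior ability, as the sample paper describes.
import random

CONDITIONS = ("control", "12-hour", "24-hour")

def assign_matched_triplets(participants):
    """participants: list of (id, gpa) tuples; returns {id: condition}."""
    ordered = sorted(participants, key=lambda p: p[1], reverse=True)
    assignment = {}
    for i in range(0, len(ordered), len(CONDITIONS)):
        triplet = ordered[i:i + len(CONDITIONS)]
        # Randomly permute the conditions within each GPA-matched triplet.
        shuffled = random.sample(CONDITIONS, k=len(triplet))
        for (pid, _gpa), condition in zip(triplet, shuffled):
            assignment[pid] = condition
    return assignment

print(assign_matched_triplets([("P01", 3.9), ("P02", 3.7), ("P03", 3.5),
                               ("P04", 3.4), ("P05", 3.2), ("P06", 2.9)]))
```

Within each triplet the three participants have similar GPAs, so any GPA-related differences in cognitive ability are spread evenly across the three deprivation conditions.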
This procedure was followed to prevent the group influence of some participants seeing others give up. Any participant still working on the puzzle after 40 minutes was stopped to keep the time of the study manageable. Immediately after each participant stopped working on the puzzle, he/she gave demographic information and completed a few manipulation-check items. We then debriefed and dismissed each participant outside of the lab. Results. (Side note: The writers summarize their findings, including problems encountered.) Perseverance data from one control-group participant were eliminated because she had to leave the session early. Concentration data from another control-group participant were dropped because he did not complete the test correctly. Three manipulation-check questions indicated that each participant correctly perceived his or her deprivation condition and had followed the rules for it. The average concentration score was 77.78 (SD = 14.21), which was very good considering that anything over 50 percent is labeled "good" or "above average." The average time spent on the puzzle was 24.00 minutes (SD = 10.16), with a maximum of 40 minutes allowed. We predicted that participants in the 24-hour deprivation group would perform worse on the concentration test and the perseverance task than those in the 12-hour group, who in turn would perform worse than those in the control group. A one-way analysis of variance (ANOVA) showed no significant effect of deprivation condition on concentration, F(2, 46) = 1.06, p = .36 (see Figure 1). (Side note: "See Figure 1" sends readers to a figure (graph, photograph, chart, or drawing) contained in the paper. All figures and illustrations (other than tables) are numbered in the order that they are first mentioned in the text.) [Figure 1. Mean score on the concentration test by deprivation condition: no deprivation, 12-hour deprivation, 24-hour deprivation.] Another one-way ANOVA indicated a significant effect of deprivation condition on perseverance time, F(2, 47) = 7.41, p < .05. Post-hoc Tukey tests indicated that the 12-hour deprivation group (M = 17.79, SD = 7.84) spent significantly less time on the perseverance task than either the control group (M = 26.0, SD = 6.20) or the 24-hour group (M = 28.75, SD = 12.11), with no significant difference between the latter two groups (see Figure 2). No significant effect was found for gender either generally or with specific deprivation conditions, Fs < 1.00. Unexpectedly, food deprivation had no significant effect on concentration scores. (Side note: The researchers restate their hypotheses and the results, and go on to interpret those results.) Overall, we found support for our hypothesis that 12 hours of food deprivation would significantly impair perseverance when compared to no deprivation. Unexpectedly, 24 hours of food deprivation did not significantly affect perseverance relative to the control group. Also unexpectedly, food deprivation did not significantly affect concentration scores. [Figure 2. Mean time spent on the perseverance task by deprivation condition: no deprivation, 12-hour deprivation, 24-hour deprivation.] Discussion. The purpose of this study was to test how different levels of food deprivation affect concentration on and perseverance with difficult tasks. We predicted that the longer people had been deprived of food, the lower they would score on the concentration task, and the less time they would spend on the perseverance task.
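The kind of analysis reported in the Results section can be reproduced in outline with standard statistical tooling. The sketch below is illustrative only: the perseverance times are made-up placeholders (the paper does not publish its raw data), and it assumes NumPy, SciPy, and statsmodels are available.

```python
# Illustrative one-way ANOVA and Tukey post-hoc test, mirroring the analysis
# style described in the Results section. The data are invented placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
control = rng.normal(26.0, 6.2, 17)   # minutes on the puzzle, hypothetical
dep_12h = rng.normal(17.8, 7.8, 17)
dep_24h = rng.normal(28.8, 12.1, 16)

# One-way ANOVA across the three deprivation conditions.
f_stat, p_value = stats.f_oneway(control, dep_12h, dep_24h)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# Post-hoc Tukey HSD to see which pairs of groups differ.
times = np.concatenate([control, dep_12h, dep_24h])
groups = (["control"] * len(control) + ["12-hour"] * len(dep_12h)
          + ["24-hour"] * len(dep_24h))
print(pairwise_tukeyhsd(times, groups, alpha=0.05))
```

With data shaped like the study's reported means, the ANOVA flags an overall group difference and the Tukey output shows which specific pairs (for example, control vs. 12-hour) drive it, which is exactly the reasoning pattern the sample paper walks through.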
In this study, those deprived of food did give up more quickly on the puzzle, but only in the 12-hour group. Thus, the hypothesis was partially supported for the perseverance task. However, concentration was found to be unaffected by food deprivation, and thus the hypothesis was not supported for that task. (Side note: The writers speculate on possible explanations for the unexpected results.) The findings of this study are consistent with those of Green et al. (1995), where short-term food deprivation did not affect some aspects of cognition, including attentional focus. Taken together, these findings suggest that concentration is not significantly impaired by short-term food deprivation. The findings on perseverance, however, are not as easily explained. We surmise that the participants in the 12-hour group gave up more quickly on the perseverance task because of their hunger produced by the food deprivation. But why, then, did those in the 24-hour group fail to yield the same effect? We postulate that this result can be explained by the concept of "learned industriousness," wherein participants who perform one difficult task do better on a subsequent task than the participants who never took the initial task (Eisenberger & Leonard, 1980; Hickman, Stromme, & Lippman, 1998). Because participants had successfully completed 24 hours of fasting already, their tendency to persevere had already been increased, if only temporarily. Another possible explanation is that the motivational state of a participant may be a significant determinant of behavior under testing (Saugstad, 1967). This idea may also explain the short perseverance times in the 12-hour group: because these participants took the tests at 10 p.m., a prime time of the night for conducting business and socializing on a college campus, they may have been less motivated to take the time to work on the puzzle. Research on food deprivation and cognition could continue in several directions. First, other aspects of cognition may be affected by short-term food deprivation, such as reading comprehension or motivation. With respect to this latter topic, some students in this study reported decreased motivation to complete the tasks because of a desire to eat immediately after the testing. In addition, the time of day when the respective groups took the tests may have influenced the results: those in the 24-hour group took the tests in the morning and may have been fresher and more relaxed than those in the 12-hour group, who took the tests at night. Perhaps, then, the motivation level of food-deprived participants could be effectively tested. Second, longer-term food deprivation periods, such as those experienced by people fasting for religious reasons, could be explored. It is possible that cognitive function fluctuates over the duration of deprivation. Studies could ask how long a person can remain focused despite a lack of nutrition. Third, and perhaps most fascinating, studies could explore how food deprivation affects learned industriousness. As stated above, one possible explanation for the better perseverance times in the 24-hour group could be that they spontaneously improved their perseverance faculties by simply forcing themselves not to eat for 24 hours. Therefore, research could study how food deprivation affects the acquisition of perseverance.
(Side note: The conclusion summarizes the outcomes, stresses the experiment's value, and anticipates further advances on the topic.) In conclusion, the results of this study provide some fascinating insights into the cognitive and physiological effects of skipping meals. Contrary to what we predicted, a person may indeed be very capable of concentrating after not eating for many hours. On the other hand, if one is taking a long test or working long hours at a tedious task that requires perseverance, one may be hindered by not eating for a short time, as shown by the 12-hour group's performance on the perseverance task. Many people—students, working mothers, and those interested in fasting, to mention a few—have to deal with short-term food deprivation, intentional or unintentional. This research and other research to follow will contribute to knowledge of the disadvantages—and possible advantages—of skipping meals. The mixed results of this study suggest that we have much more to learn about short-term food deprivation.

References. (Side note: All works referred to in the paper appear on the reference page, listed alphabetically by author (or title). Each entry follows APA guidelines for listing authors, dates, titles, and publishing information. Capitalization, punctuation, and hanging indentation are consistent with APA format.)

Costa, A. L. (1984). Thinking: How do we know students are getting better at it? Roeper Review, 6, 197–199.
Crumpton, E., Wine, D. B., & Drenick, E. J. (1966). Starvation: Stress or satisfaction? Journal of the American Medical Association, 196, 394–396.
D'Agostino, C. A. F. (1996). Testing a social-cognitive model of achievement motivation. Dissertation Abstracts International Section A: Humanities & Social Sciences, 57, 1985.
Eisenberger, R., & Leonard, J. M. (1980). Effects of conceptual task difficulty on generalized persistence. American Journal of Psychology, 93, 285–298.
Green, M. W., Elliman, N. A., & Rogers, P. J. (1995). Lack of effect of short-term fasting on cognitive function. Journal of Psychiatric Research, 29, 245–253.
Green, M. W., Elliman, N. A., & Rogers, P. J. (1996). Hunger, caloric preloading, and the selective processing of food and body shape words. British Journal of Clinical Psychology, 35, 143–151.
Green, M. W., Elliman, N. A., & Rogers, P. J. (1997). The effects of food deprivation and incentive motivation on blood glucose levels and cognitive function. Psychopharmacology, 134, 88–94.
Hickman, K. L., Stromme, C., & Lippman, L. G. (1998). Learned industriousness: Replication in principle. Journal of General Psychology, 125, 213–217.
Keys, A., Brozek, J., Henschel, A., Mickelsen, O., & Taylor, H. L. (1950). The biology of human starvation (Vol. 2). Minneapolis: University of Minnesota Press.
Kollar, E. J., Slater, G. R., Palmer, J. O., Docter, R. F., & Mandell, A. J. (1964). Measurement of stress in fasting man. Archives of General Psychology, 11, 113–125.
Pinel, J. P. (2000). Biopsychology (4th ed.). Boston: Allyn and Bacon.
Pollitt, E., Lewis, N. L., Garza, C., & Shulman, R. J. (1982–1983). Fasting and cognitive function. Journal of Psychiatric Research, 17, 169–174.
Saugstad, P. (1967). Effect of food deprivation on perception-cognition: A comment [Comment on the article by David L. Wolitzky]. Psychological Bulletin, 68, 345–346.
Smith, A. P., & Kendrick, A. M. (1992). Meals and performance. In A. P. Smith & D. M. Jones (Eds.), Handbook of human performance: Vol. 2. Health and performance (pp. 1–23). San Diego: Academic Press.
Smith, A. P., Kendrick, A. M., & Maben, A. L. (1992). Effects of breakfast and caffeine on performance and mood in the late morning and after lunch. Neuropsychobiology, 26, 198–204.

Wednesday, October 23, 2019

Disruptive Technology

Disruptive Technology

Abstract

The objective of this project is to explain the emergence of disruptive technology in the IT industry that will enable and help organizations grow in a cost-effective manner. One of the hottest topics in today's IT corridors is the use and benefits of virtualization technologies. IT companies all over the globe are implementing virtualization for a diversity of business requirements, driven by the prospect of improving server flexibility and reducing operational costs. InfoTech Solutions, being a dominant IT solution provider, can benefit broadly from implementing virtualization. This paper is intended to provide complete details of virtualization, its advantages, and strategies for SMEs to migrate.

Introduction

The 2009 IT buzzword is 'virtualization'. Small, medium and large business organizations have seriously started to reorganize their e-business strategy around the successful disruptive technology of virtualization. Virtualization of business applications permits IT operations in organizations of all sizes to decrease costs, improve IT services and reduce risk. The most remarkable cost savings come from reduced hardware, space and energy use, while productivity gains lead to further savings. In the small business sector, virtualization can be defined as a technology that permits application workloads to be maintained independently of the host hardware. Several applications can share a single physical server. Workloads can be moved from one host to another without any downtime. IT infrastructure can be managed as a pool of resources, rather than as a collection of physical devices.

Disruptive Technology

A disruptive technology, or disruptive innovation, is an innovation that improves a product or service in a way the market does not expect, typically by reducing the price or changing the market dramatically. Christensen (2000) stated that ''disruptive technologies are typically simpler, cheaper, and more reliable and convenient than established technologies'' (p. 192). Before doing any research on disruptive technology, it is useful and necessary to summarize Christensen's notion of disruptive technology. Christensen was projected as a "guru" by the business press (Scherreik, 2000). His work has been broadly cited by scholars and researchers working in different disciplines and on topics such as new product development, marketing and management strategy, and so on. In his book "The Innovator's Dilemma" (Christensen 1997), Christensen made significant observations about the circumstances under which established companies or organizations lose the market to an entrant with what he referred to as a disruptive technology. This theory became extremely influential in the management decision-making process (Vaishnav, 2008). Christensen's arguments, drawn from his academic work (Christensen 1992; Christensen and Rosenbloom 1995; Christensen, Suarez et al. 1996) rather than from his famous paperbacks (Christensen 1997; Christensen and Raynor 2003), explain that the entrant might have more advantage than the incumbent, and that understanding this requires an understanding of three important forces: technological capability (Henderson and Clark 1990), organizational dynamics (Anderson and Tushman 1990), and value (Christensen and Rosenbloom 1995).
He argued further that a company's competitive strategy, and chiefly its earlier choices of which markets to serve, determines its perception of the economic value of a new technology and shapes the rewards it expects to obtain through innovation. Christensen (1995) classifies new technology into two types: sustaining and disruptive. Sustaining technology relies on incremental improvements to an already established technology, whereas disruptive technology is new and replaces an established technology unexpectedly. Disruptive technologies may lack refinement and often have performance problems because they are fresh and may not yet have a proven practical application. It takes a great deal of time and energy to create something new and innovative that significantly changes the way things are done. Most organizations are concerned with maintaining and sustaining their existing products and technologies instead of creating something new and different that might improve the situation. They make minor modifications to improve the current product; these changes give a little new life to those products, increasing sales temporarily and keeping the technology alive a bit longer. Disruptive technologies generally emerge from outside the mainstream. For example, the light bulb was not invented by the candle industry seeking to improve its results. Owners of established technology businesses tend to focus on incremental improvements to their existing products and to avoid potential threats to their business (Techcom, 2004). Compared to sustaining products, disruptive technologies move in different directions, producing ideas that compete with products in current markets and could potentially replace the mainstream products in use. In that sense it is not only disruption but innovation: not merely replacing what we have now, but improving on it, making things better, faster and, often, simply more appealing. Whether disruptive or innovative, such technologies are turning the "future wave" into reality and are slowly taking over the world. On one hand, the threat of disruption makes incumbents anxious about losing their market, while emerging entrants are confident of inventing the next disruptive technology; such expectations and worries arguably produce more competition in the marketplace. Every year there seems to be a laundry list of products and technologies that are going to "change the world as we know it." One that appears to have the potential to earn the title of a disruptive technology has been around for a while now: virtualization.

Gartner (2008) describes a disruptive technology as one "causing major change in the accepted way of doing things, including business models, processes, revenue streams, industry dynamics and consumer behaviors", and virtualization is one of the top ten disruptive technologies listed by Gartner (Gartner.com). Virtualization technology is not new. As computers became more common, it became obvious that simply time-sharing a single computer was not always ideal, because the system could be misused intentionally or unintentionally and a single fault could crash the entire system and bring it to a halt. To avoid this, the multi-system concept emerged, which provided many advantages in an organizational environment, such as privacy, data security, performance and isolation.
For example, in an organizational setting it is often necessary to keep certain activities running on different systems. A testing application running on a system may sometimes halt or crash that system completely, so it is sensible to run such an application on a separate system where it cannot affect the rest of the network. On the other hand, placing different applications on the same system may reduce performance, as they compete for the same system resources such as memory, network I/O, hard disk I/O and scheduling priority (Barham et al., 2003). The performance of both system and applications improves greatly if the applications are placed on separate systems so that each has its own resources. However, it is difficult for most organizations to invest in multiple systems; it is hard to keep all of them busy to their full potential, they are difficult to maintain, and their asset value keeps depreciating. Investing in multiple systems can therefore become wasteful, even though having multiple systems clearly has its advantages. Considering this cost and waste, IBM introduced the first virtual machine in the 1960s, which made one system behave as if it were many. At the start, this new technology allowed individuals to run multiple applications at the same time, increasing both personal productivity and the computer's ability to multitask. Along with this multitasking capability, virtualization was also a great money saver: the ability of one computer to do more than one task at a time became valuable to companies because it let them leverage their investments fully (VMWare.com).

Virtualization has recently become a hyped and much-discussed topic because of its potential. First, it can use computer resources far more effectively, maximizing the company's hardware investment; it is estimated that only 25% of total resources are utilized in an average data center. Through virtualization, large numbers of older systems can be replaced by modern, reliable and scalable enterprise servers, reducing hardware and infrastructure costs significantly. And virtualization offers much more than server consolidation, such as the ability to suspend, resume, checkpoint and migrate running workloads (Chesbrough, 1999a, 1999b). This is exceptionally useful for long-running jobs: if a long-running job is assigned to a virtual machine with checkpoints enabled and the job stops or hangs, it can be restarted from where it stopped instead of from the beginning. The main difference between today's virtualization and the older mainframe era is that a service can be allocated to any location of choice; such distributed virtual machines open up many possibilities, such as network monitoring, security policy validation and content distribution (Peterson et al., 2002). The way virtual technology breaks the boundaries of a single operating system is what places it firmly in the disruptive technology group: it allows users to run multiple applications on multiple operating systems on a single computer simultaneously (VMWare.com, 2009). In essence, a single physical server's hardware is abstracted into software that uses all the available hardware resources to create a virtual mirror of itself. The replications created in this way can be used as software-based computers that run multiple applications at the same time. These software-based computers have all the attributes of physical computers, such as RAM, CPU and a NIC interface; the only difference is that there is a single physical system instead of many, running different operating systems (VMWare.com, 2009). These are called guest machines.
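To make the checkpoint-and-restart capability described above more concrete, the following is a minimal sketch using the libvirt Python bindings against a QEMU/KVM host. The connection URI, the guest name "batch-job-vm" and the checkpoint file path are hypothetical, and error handling is kept to the bare minimum; it is an illustration of the capability discussed, not part of any specific vendor's product.

```python
# Minimal sketch: checkpointing a long-running job inside a guest VM with libvirt.
# Assumptions: libvirt-python is installed, a QEMU/KVM host is reachable at the
# URI below, and a guest named "batch-job-vm" already exists. All names are
# hypothetical and used only for illustration.
import libvirt

HOST_URI = "qemu:///system"                      # local hypervisor (assumed)
GUEST_NAME = "batch-job-vm"                      # hypothetical guest running the long job
CHECKPOINT_FILE = "/var/tmp/batch-job-vm.save"   # hypothetical save-file path

conn = libvirt.open(HOST_URI)
dom = conn.lookupByName(GUEST_NAME)

# Pause the guest in place (no state is lost), then let it continue.
dom.suspend()
dom.resume()

# Save the guest's full memory state to disk. The guest stops running, but its
# state (the "checkpoint") is preserved in the file.
dom.save(CHECKPOINT_FILE)

# Later (or after a host reboot) the job continues from exactly where it stopped,
# instead of being restarted from the beginning.
conn.restore(CHECKPOINT_FILE)

conn.close()
```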
Virtual Machine Monitor

Guest virtual machines are hosted by a software layer called a Virtual Machine Monitor, or VMM, which goes hand in hand with virtual machines. In practice, the VMM is referred to as the host and the hosted virtual machines as guests. The physical resources required by the guests are provided by the software layer of the VMM or host. The figure below represents the relationship between the VMM and its guests. The VMM supplies the required virtual versions of the processor and of system devices such as I/O devices, storage and memory. It also provides separation between the virtual machines and their host, so that problems in one cannot affect another.

According to a recent Springboard Research study, spending on virtualization software and services will reach 1.5 billion US dollars by the end of 2010. The research also notes that 50% of CIOs are interested in deploying virtualization to overcome issues such as poor system performance and low capacity utilization, and to face the challenges of a growing IT infrastructure. TheInfoPro, a research company, states that more than 50% of newly installed servers are based on virtualization and that this figure is expected to grow to 80% by the end of 2012. Virtualization will be the highest-impact trend changing infrastructure and operations through 2012. According to Gartner, Inc. (2008), virtualization will transform how IT is bought, planned, deployed and managed by companies; as a result, it is generating a fresh wave of competition among infrastructure vendors that will lead to market negotiation and consolidation over the coming years. The market for PC virtualization is also booming rapidly, with growth expected to reach 660 million, compared with 5 million in 2007.

Virtualization strategy for mid-sized businesses

Virtualization has turned out to be a significant IT strategy for small and mid-sized business (SME) organizations. It not only offers cost savings but also addresses business continuity issues and allows IT managers to:
• Manage and reduce the downtime caused by planned hardware maintenance, resulting in higher system availability.
• Test, investigate and execute disaster recovery plans.
• Secure data, with non-destructive backup and restore processes.
• Check the stability of real-time workloads.
In these competitive and demanding times, SME organizations need to simplify their IT infrastructure and cut costs. However, with varied storage, server and network requirements, and often without sufficient physical space to house and maintain systems, a company's options can be restricted by both limited space and budget concerns. Virtualization can offer solutions to these kinds of issues, and SMEs can benefit significantly not only from server consolidation but also from affordable business continuity.

What is virtualization for mid-sized businesses?

In the small business sector, virtualization can be defined as a technology that permits application workloads to be maintained independently of the host hardware. Several applications can share a single physical server, workloads can be moved from one host to another without any downtime, and the IT infrastructure can be managed as a pool of resources rather than a collection of physical devices.
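As an illustration of the point that workloads can be moved from one host to another without downtime, the sketch below uses the libvirt Python bindings to live-migrate a running guest between two hypothetical KVM hosts. The host URIs and the guest name are assumptions made for the example, not details of any particular deployment.

```python
# Minimal sketch: live-migrating a running guest between two hosts with libvirt.
# Assumptions: libvirt-python installed, SSH access between the hosts, and a
# running guest named "erp-app-vm". Host URIs and the guest name are hypothetical.
import libvirt

SRC_URI = "qemu+ssh://host-a.example.local/system"   # current host (assumed)
DST_URI = "qemu+ssh://host-b.example.local/system"   # target host (assumed)
GUEST_NAME = "erp-app-vm"                            # hypothetical workload

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)

dom = src.lookupByName(GUEST_NAME)

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied across,
# so the application stays available during the move; PERSIST_DEST keeps the
# guest's definition on the target host afterwards.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST

migrated = dom.migrate(dst, flags, None, None, 0)
print("Guest now running on:", migrated.connect().getHostname())

src.close()
dst.close()
```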
It is often assumed that virtualization is only for large enterprises, but in fact it is not: it is a widely established technology that decreases hardware requirements, increases the use of hardware resources, modernizes management and reduces energy consumption.

Economics of virtualization for the midmarket

Research by VMWare.com (2009) shows that SMEs that invested in a virtualization strategy received their return on investment (ROI) in less than a year. In certain cases this can be less than seven months with the latest Intel Xeon 5500 series processors (http://www-03.ibm.com/systems/resources/6412_Virtualization_Strategy_-_US_White_Paper_-_Apr_24-09.pdf [accessed on 04/09/09]). The image below shows how virtualization simplified a large utility company's infrastructure of 1,000 systems, with their racks and cables, into a dramatically simpler form.

Source: http://www-03.ibm.com/systems/resources/6412_Virtualization_Strategy_-_US_White_Paper_-_Apr_24-09.pdf [accessed on 04/09/09]

Virtualization SME advantages

1. A virtualization and management suite provides a flexible, low-cost development platform and a highly capable environment.
2. Virtualization provides the ability to move live virtual machines between physical hosts. This offers numerous advantages such as business continuity, disaster recovery, workload balancing and even energy savings, by allowing running applications to be exchanged between physical servers without disturbing the service.
3. Virtualization can help you take full advantage of the value of IT pounds:
• Business agility in changing markets
• A flexible IT infrastructure that can scale with business growth
• High-level performance that can handle the majority of demanding applications
• An industry-standard platform architecture with intelligent management tools
• Servers with enterprise attributes, regardless of their size or form factor
4. Virtualization can help you improve IT services:
• The ability to maintain workloads rapidly by setting up automatic maintenance processes that can be scheduled in weeks, days or even minutes
• Improved IT responsiveness to business needs
• Planned downtime eliminated by shifting workloads between hosts
• Greatly reduced, or even eliminated, unplanned downtime
• Reduced costs in technical support, training and maintenance

Conclusion

This is the right time for small and mid-sized businesses like InfoTech Solutions to implement a virtualization strategy. Virtualization is a significant element of IT strategy for businesses of all sizes, with a wide range of benefits and advantages. It can help InfoTech Solutions build an IT infrastructure with enterprise-class capabilities and a strong return on investment. It is expected that more than 80% of organizations will have implemented virtualization by the end of 2012, so SME organizations like InfoTech Solutions should seriously reconsider their e-business strategy with virtualization in mind, or they may be left behind their competitors.

References

1. Adner, Ron (2002). When Are Technologies Disruptive? A Demand-Based View of the Emergence of Competition. Strategic Management Journal 23(8): 667–88.
2. Anderson, P. and Tushman, M. L. (1990).
"Technological Discontinuities and Dominant Designs: A Cyclical Model of Technological Change." Administrative Science Quarterly 35(4): 604–633.
3. Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I. and Warfield, A. (2003). Xen and the Art of Virtualization. In Proc. 19th SOSP, October 2003.
4. Chesbrough, Henry (1999a). Arrested Development: The Experience of European Hard-Disk-Drive Firms in Comparison with US and Japanese Firms. Journal of Evolutionary Economics 9(3): 287–329.
5. Vaishnav, Chintan (2008). Does Technology Disruption Always Mean Industry Disruption? Massachusetts Institute of Technology.
6. Christensen, Clayton M. (2000). The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School Press.
7. Christensen, C. M. (1992). "Exploring the Limits of the Technology S-Curve: Architectural Technologies." Production and Operations Management 1(4).
8. Christensen, C. M. and Rosenbloom, R. S. (1995). "Explaining the Attacker's Advantage: Technological Paradigms, Organizational Dynamics, and the Value Network." Research Policy 24(2): 233–257.
9. Christensen, C. M., Suarez, F. F., et al. (1996). Strategies for Survival in Fast-Changing Industries. Cambridge, MA: International Center for Research on the Management of Technology.
10. Christensen, C. M. (1992). "Exploring the Limits of the Technology S-Curve: Component Technologies." Production and Operations Management 1(4).
11. Christensen, C. M. (1997). The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School Press.
12. Christensen, C. M. and Raynor, M. E. (2003). The Innovator's Solution: Creating and Sustaining Successful Growth. Boston, MA: Harvard Business School Press.
13. Cohan, Peter S. (2000). The Dilemma of the "Innovator's Dilemma": Clayton Christensen's Management Theories Are Suddenly All the Rage, but Are They Ripe for Disruption? Industry Standard, January 10, 2000.
14. Gartner Says; http://www.gartner.com/it/page.jsp?id=638207 [accessed on 04/09/09]
15. Henderson, R. M. and Clark, K. B. (1990). "Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms." Administrative Science Quarterly 35(1): 9–30.
16. MacMillan, Ian C. and McGrath, Rita Gunther (2000). Technology Strategy in Lumpy Market Landscapes. In: Wharton on Managing Emerging Technologies. G. S. Day, P. J. H. Schoemaker and R. E. Gunther (eds.). New York: Wiley, 150–171.
17. Scherreik, Susan (2000). When a Guru Manages Money. Business Week, July 31, 2000.
18. Peterson, L., Anderson, T., Culler, D. and Roscoe, T. (2002). "A Blueprint for Introducing Disruptive Technology into the Internet." In Proceedings of HotNets I, Princeton, NJ, October 2002.
19. "Virtualization Basics." VMWare.com. http://www.vmware.com/virtualization/ [accessed on 04/09/09]

Disruptive Technology

One of the most consistent patterns in business is the failure of leading companies to stay at the top of their industries when technologies or markets change. Goodyear and Firestone entered the radial-tire market quite late. Xerox let Canon create the small-copier market. Bucyrus-Erie allowed Caterpillar and Deere to take over the mechanical excavator market. Sears gave way to Wal-Mart. The pattern of failure has been especially striking in the computer industry.
IBM dominated the mainframe market but missed by years the emergence of minicomputers, which were technologically much simpler than mainframes. Digital Equipment dominated the minicomputer market with innovations like its VAX architecture but missed the personal-computer market almost completely. Apple Computer led the world of personal computing and established the standard for user-friendly computing but lagged five years behind the leaders in bringing its portable computer to market.

Why is it that companies like these invest aggressively, and successfully, in the technologies necessary to retain their current customers but then fail to make certain other technological investments that customers of the future will demand? Undoubtedly, bureaucracy, arrogance, tired executive blood, poor planning, and short-term investment horizons have all played a role. But a more fundamental reason lies at the heart of the paradox: leading companies succumb to one of the most popular, and valuable, management dogmas. They stay close to their customers.

Although most managers like to think they are in control, customers wield extraordinary power in directing a company's investments. Before managers decide to launch a technology, develop a product, build a plant, or establish new channels of distribution, they must look to their customers first: Do their customers want it? How big will the market be? Will the investment be profitable? The more astutely managers ask and answer these questions, the more completely their investments will be aligned with the needs of their customers. This is the way a well-managed company should operate. Right?

But what happens when customers reject a new technology, product concept, or way of doing business because it does not address their needs as effectively as a company's current approach? The large photocopying centers that represented the core of Xerox's customer base at first had no use for small, slow tabletop copiers. The excavation contractors that had relied on Bucyrus-Erie's big-bucket steam- and diesel-powered cable shovels didn't want hydraulic excavators because, initially, they were small and weak. IBM's large commercial, government, and industrial customers saw no immediate use for minicomputers. In each instance, companies listened to their customers, gave them the product performance they were looking for, and, in the end, were hurt by the very technologies their customers led them to ignore.

We have seen this pattern repeatedly in an ongoing study of leading companies in a variety of industries that have confronted technological change. The research shows that most well-managed, established companies are consistently ahead of their industries in developing and commercializing new technologies, from incremental improvements to radically new approaches, as long as those technologies address the next-generation performance needs of their customers. However, these same companies are rarely in the forefront of commercializing new technologies that don't initially meet the needs of mainstream customers and appeal only to small or emerging markets. Using the rational, analytical investment processes that most well-managed companies have developed, it is nearly impossible to build a cogent case for diverting resources from known customer needs in established markets to markets and customers that seem insignificant or do not yet exist.
After all, meeting the needs of established customers and fending off competitors takes all the resources a company has, and then some. In well-managed companies, the processes used to identify customers' needs, forecast technological trends, assess profitability, allocate resources across competing proposals for investment, and take new products to market are focused, for all the right reasons, on current customers and markets. These processes are designed to weed out proposed products and technologies that do not address customers' needs. In fact, the processes and incentives that companies use to keep focused on their main customers work so well that they blind those companies to important new technologies in emerging markets.

Many companies have learned the hard way the perils of ignoring new technologies that do not initially meet the needs of mainstream customers. For example, although personal computers did not meet the requirements of mainstream minicomputer users in the early 1980s, the computing power of the desktop machines improved at a much faster rate than minicomputer users' demands for computing power did. As a result, personal computers caught up with the computing needs of many of the customers of Wang, Prime, Nixdorf, Data General, and Digital Equipment. Today they are performance-competitive with minicomputers in many applications. For the minicomputer makers, keeping close to mainstream customers and ignoring what were initially low-performance desktop technologies used by seemingly insignificant customers in emerging markets was a rational decision, but one that proved disastrous.
The pattern in the disk-drive industry has been repeated in mar/y other industries: the leading, established companies have consistently led the industry in developing and adopting new technologies that their customers demanded- even when those technologies required completely different technological competencies and manufacturing capabilities from the ones the companies had. In spite of this aggressive technological posture, no single disk-drive manufacturer has been able to dominate the industry for more than a few years. A series of companies have entered the business and risen to prominence, only to be toppled by newcomers who pursued technologies that at first did not meet the needs of mainstream customers. As a result, not one of the independent disk-drive companies that existed in 1976 survives today. To explain the differences in the impact of certain kinds of technological innovations on a given industry, the concept of performance trajectories – the rate at which the performance of a product has improved, and is expected to improve, over time – can be helpful. Almost every industry has a critical performance trajectory. In mechanical excavators, the critical trajectory is the annual improvement in cubic yards of earth moved per minute. In photocopiers, an important performance trajectory is improvement in number of copies per minute. In disk drives, one crucial measure of performance is storage capacity, which has advanced 50% each year on average for a given size of drive. Different types of technological innovations affect performance trajectories in different ways. On the one hand, sustaining technologies tend to maintain a rate of improvement; that is, they give customers something more or better in the attributes they already value. For example, thin-film components in disk drives, which replaced conventional ferrite heads and oxide disks between 1982 and 1990, enabled information to be recorded more densely on disks. Engineers had been pushing the limits of the' performance they could wring from ferrite heads and oxide disks, but the drives employing these technologies seemed to have reached the natural limits of an S curve. At that point, new thin-film technologies emerged that restored- or sustained-the historical trajectory of performance improvement. On the other hand, disruptive technologies introduce a very different package of attributes from the one mainstream customers historically value, and they often perform far worse along one or two dimensions that are particularly important to those customers. As a rule, mainstream customers are unwilling to use a disruptive product in applications they know and understand. At first, then, disruptive technologies tend to be used and valued only in new markets or new applications; in fact, they generally make possible the emergence of new markets. For example, Sony's early transistor adios sacrificed sound fidelity but created a market for portable radios by offering a new and different package of attributes- small size, light weight, and portability. In the history of the hard-disk-drive industry, the leaders stumbled at each point of disruptive technological change: when the diameter of disk drives shrank from the original 14 inches to 8 inches, then to 5. 25 inches, and finally to 3. 5 inches. Each of these new architectures, initially offered the market substantially less storage capacity than the typical user in the established market required. 
For example, the 8-inch drive offered 20 MB when it was introduced, while the primary market for disk drives at that time-mainframes-required 200 MB on average. Not surprisingly, the leading computer manufacturers rejected the 8-inch architecture at first. As a result, their suppliers, whose mainstream products consisted of 14-inch drives with more than 200 MB of capacity, did not pursue the disruptive products aggressively. The pattern was repeated when the 5. 25-inch and 3. 5-inch drives emerged: established computer makers rejected the drives as inadequate, and, in turn, their disk-drive suppliers ignored them as well. But while they offered less storage capacity, the disruptive architectures created other important attributes- internal power supplies and smaller size (8-inch drives); still smaller size and low-cost stepper motors (5. 25-inch drives); and ruggedness, light weight, and low-power consumption (3. 5-inch drives). From the late 1970s to the mid-1980s, the availability of the three drives made possible the development of new markets for minicomputers, desktop PCs, and portable computers, respectively. Although the smaller drives represented disruptive technological change, each was technologically straightforward. In fact, there were engineers at many leading companies who championed the new technologies and built working prototypes with bootlegged resources before management gave a formal go-ahead. Still, the leading companies could not move the products through their organizations and into the market in a timely way. Each time a disruptive technology emerged, between one-half and two-thirds of the established manufacturers failed to introduce models employing the new architecture-in stark contrast to their timely launches of critical sustaining technologies. Those companies that finally did launch new models typically lagged behind entrant companies by two years-eons in an industry whose products' life cycles are often two y. ears. Three waves of entrant companies led these revolutions; they first captured the new markets and then dethroned the leading companies in the mainstream markets. How could technologies that were initially inferior and useful only to new markets eventually threaten leading companies in established markets? Once the disruptive architectures became established in their new markets, sustaining innovations raised each architecture's performance along steep trajectories- so steep that the performance available from each architecture soon satisfied the needs of customers in the established markets. For example, the 5. 25-inch drive, whose initial 5 MB of capacity in 1980 was only a fraction of the capacity that the minicomputer market needed, became fully performance-competitive in the minicomputer market by 1986 and in the mainframe market by 1991. (See the graph â€Å"How Disk-Drive Performance Met Market Needs. ) A company's revenue and cost structures play a critical role in the way it evaluates proposed technological innovations. Generally, disruptive technologies look financially unattractive to established companies. The potential revenues from the discernible markets are small, and it is often difficult to project how big the markets for the technology will be over the long term. As a result, managers typically conclude that the technology cannot make a meaningful contribution to corporate growth and, therefore, that it is not worth the management effort required to develop it. 
In addition, established companies have often installed higher cost structures to serve sustaining technologies than those required by disruptive technologies. As a result, managers typically see themselves as having two choices when deciding whether to pursue disruptive technologies. One is to go downmarket and accept the lower profit margins of the emerging markets that the disruptive technologies will initially serve. The other is to go upmarket with sustaining technologies and enter market segments whose profit margins are alluringly high. For example, the margins of IBM's mainframes are still higher than those of PCs). Any rational resource-allocation process in companies serving established markets will choose going upmarket rather than going down. Managers of companies that have championed disruptive technologies in emerging markets look at the world quite differently. Without the high cost structures of their established counterparts, these companies find the emerging markets appealing. Once the companies have secured a foothold in the markets and mproved the performance of their technologies, the established markets above them, served by high-cost suppliers, look appetizing. When they do attack, the entrant companies find the established players to be easy and unprepared opponents because the opponents have been looking upmarket themselves, discounting the threat from below. It is tempting to stop at this point and conclude that a valuable lesson has been learned: managers can avoid missing the next wave by paying careful attention to potentially disruptive technologies that do not meet current customers' needs. But recognizing the pattern and figuring out how to break it are two different things. Although entrants invaded established markets with new technologies three times in succession, none of the established leaders in the disk-drive industry seemed to learn from the experiences of those that fell before them. Management myopia or lack of foresight cannot explain these failures. The problem is that managers keep doing what has worked in the past: serving the rapidly growing needs of their current customers. The processes that successful, well-managed companies have developed to allocate resources among proposed investments are incapable of funneling resources into programs that current customers explicitly don't want and whose profit margins seem unattractive. Managing the development of new technology is tightly linked to a company's investment processes. Most strategic proposals-to add capacity or to develop new products or processes- take shape at the lower levels of organizations in engineering groups or project teams. Companies then use analytical planning and budgeting systems to select from among the candidates competing for funds. Proposals to create new businesses in emerging markets are particularly challenging to assess because they depend on notoriously unreliable estimates of market size. Because managers are evaluated on their ability to place the right bets, it is not surprising that in well-managed companies, mid- and top-level managers back projects in which the market seems assured. By staying close to lead customers, as they have been trained to do, managers focus resources on fulfilling the requirements of those reliable customers that can be served profitably. Risk is reduced-and careers are safeguarded-by giving known customers what they want. 
Seagate Technology's experience illustrates the consequences of relying on such resource-allocation processes to evaluate disruptive technologies. By almost any measure, Seagate, based in Scotts Valley, California, was one of the most successful and aggressively' managed companies in the history of the microelectronics industry: from its inception in 1980, Seagate's revenues had grown to more than $700 million by 1986. It had pioneered 5. 5-inch hard-disk drives and was the main supplier of them to IBM and IBM-compatible personal-computer manufacturers. The company was the leading manufacturer of 5. 25-inch drives at the time the disruptive 3. 5-inch drives emerged in the mid-1980s. Engineers at Seagate were the second in the industry to develop working prototypes of 3. 5-inch drives. By early 1985, they had made more than 80 such models with a low level of company funding. The engineers forwarded the new models to key marketing executives, and the trade press reported that Seagate was actively developing 3. -inch drives. But Seagate's principal customers- IBM and other manufacturers of AT-class personal computers- showed no interest in the new drives. They wanted to incorporate 40-MB and 60-MB drives in their next-generation models, and Seagate's early 3. 5-inch prototypes packed only 10 MB. In response, Seagate's marketing executives lowered their sales forecasts for the new ‘disk drives. Manufacturing and financial executives at the company pointed out another drawback to the 3. 5-inch drives. According to their analysis, the new drives would never be competitive with the 5. 5-inch architecture on a cost-per-megabyte basis-an important metric that Seagate's customers used to evaluate disk drives. Given Seagate's cost structure, margins on the higher-capacity 5. 25-inch models therefore promised to be much higher than those on the smaller products. Senior managers quite rationally decided that the 3. 5-inch drive would not provide the sales volume and profit margins that Seagate needed from a new product. A ‘former Seagate marketing executive recalled, â€Å"We needed a new model that could become the next ST412 [a 5. 5-inch drive generating more than $300 million in annual sales, which was nearing the end of its life cycle]. At the time, the entire market for 3. 5-inch drives was less than $50 million. The 3. 5-inch drive just didn't fit the bill- for sales or profits. † The shelving of the 3. 5-inch drive was not a signal that Seagate was complacent about innovation. Seagate subsequently introduced new models of 5. 25-inch drives at an accelerated rate and, in so doing, introduced an impressive array of sustaining technological improvements, even though introducing them rendered a significant portion of its manufacturing capacity obsolete.