Monday, September 30, 2019

Discuss how Baz Luhrmann reaches his audience Essay

In this essay I am going to discuss how Baz Luhrmann reaches his audience and establishes mood in his film adaptation of Romeo and Juliet. To do this I am going to discuss the differences between the screenplay and Shakespeare's original text, the genre of the film, the mise-en-scene, lighting, camera shots and soundtrack. Baz Luhrmann wanted to reach a teenage audience; this is portrayed through the clothing, the fast-paced action, and the soundtrack. Luhrmann may have wanted to reach a teenage audience because there was no other recent film adaptation of Shakespeare's plays catering for a teenage audience. Baz Luhrmann reaches his audience and establishes mood in the opening credits and first scene of his film adaptation of Romeo and Juliet through his modernisation of the original text. The genre is communicated to the audience immediately in the opening credits of the screenplay. The prologue from the play is used in the form of a news report. We then hear a voiceover that sounds as if the speaker is writing what he is saying. The main points of his speech are shown in the form of newspaper headlines or flashed up on screen. When we hear the voiceover stating the prologue, his last fatal line is, "A pair of star-crossed lovers take their life." This is the last sentence flashed on screen before the audience sees each character's picture and name in a freeze frame. Luhrmann could have done this to show the audience who the possible main suspects were for the cause of Romeo and Juliet's deaths. There are shots of the film shown in quick succession which build to a climax. In these shots are images of shooting, fast cars and police. These all show conflict, action and death, i.e. tragedy. As the film progresses, it shows the audience that there are going to be deaths. Also, the operatic music we hear becomes faster and faster. This goes well with the sequence of quick film images, helping to create the feeling of tragedy. In the news report there is a picture of a broken wedding ring; this also helps to portray the message of tragedy and heartbreak. In the screenplay Shakespeare's original text has been adapted to suit the modern audience. This is seen clearly in the first scene at the petrol station. The screenplay shows a Montague biting his thumb at the Capulets, whereas in Shakespeare's original text it is a Capulet that bites his thumb at the Montagues. The roles may have been reversed because the Montagues seem childish and the Capulets are more serious. Biting one's thumb is an immature thing to do, therefore suiting the Montagues. In Shakespeare's text the Capulets are at fault for starting the fight. In this screenplay both the Montagues and the Capulets are to blame for the fight. Baz Luhrmann has adapted the original text in this way because he wanted to show that both families had involvement in the deaths of Romeo and Juliet, and that it was not more the Capulets' fault than the Montagues'. They were both at fault. To get this message across, Luhrmann started at the beginning, showing continuity. Also, certain lines from Shakespeare's text have been left out of the screenplay. For example, in the original text a Capulet states, "Let us take the law of our sides; let them begin." However, this is not included in Baz Luhrmann's screenplay. This may be because he wanted to make their actions and statements spontaneous. If he had included that line it would have shown that they had thought about their actions, which could then lead to the Montagues being the cause of the fight.
The film is set in Southern California. The first scene is set in a petrol station so that there can be a fire at the end of the scene. The cars both families drive have the first three letters of their family name as the licence plate, which would make the audience believe they are wealthy. The Capulets' car is dark, signifying evil, whereas the Montagues' car is bright, revealing their childlike, playful personalities. The Capulets' guns have their family logo on them and the word 'sword' (because it was the term used for a gun at that time), as do the Montagues'. The Montagues are portrayed in quite a 'laddish' manner and come across as harmless. They wear brightly coloured clothing, have dyed hair, bald heads, fair complexions and clean-shaven skin, and behave scandalously. They seem more like boys than men and come across as quite laid back and relaxed. These characteristics show their personality. The Capulets have a Latino look about them; they have a darker complexion, dark facial hair, and are stylishly dressed, fitting stereotypical archetypes (dark meaning villainous). They have slick, gelled-back hair, which suggests to the audience that they take pride in their appearance and like to display their wealth. The Montagues, by contrast, do not seem to care what people think and so do not dress to impress. The Capulets also have silver-heeled boots, and one in particular has a silver cap over his top teeth saying 'sin'. This shows the Capulets' hypocrisy because they wear Catholic waistcoats. A better example of the Capulets' hypocrisy is Tybalt, who has a picture of God on his waistcoat yet says he hates the word 'peace'. Because the Capulets are conscious of their reputation, they are keen not to be insulted. Tybalt smokes a cigarette in the petrol station, showing rebellious behaviour and a danger to others, unlike the Montagues, who appear harmless. During the gunfight, the Montagues continuously fire off target while the Capulets shoot accurately and handle their guns stylishly (Tybalt in particular). This, along with appearance and behaviour, shows the audience aspects of the characters and a contrast of personality. There is a variety of camera shots in the opening credits of the film. There is a lot of zooming in and out when words flash up on screen, and fast panning. There are high- and low-angle shots in a rapid sequence, which creates visual excitement; it is very dramatic and almost confusing. Luhrmann chose to use these types of camera shots because the sequence escalates to a climax and adds to the mood being created. The first scene is top-lit, has a quick, fiery pace, and uses slow motion when Tybalt drops a match and his cigarette. This creates suspense. There are lots of close-ups used, and one of the most significant is the close-up of the eyes (Benvolio and Tybalt, highlighting their evil intent) before the gunfight. This shows intensity. Fast-moving cameras make it hard to keep up with the action. This effect has been produced through the editing and helps with the formation of mood. Also, a comical effect is created when a woman in a car hits a Montague on the head with her handbag. This is to try and relax the atmosphere because the scene is so tense. Luhrmann has used a steady camera shot to involve the audience in the movie. This also adds tension, as it makes the audience feel as if they are part of the gunfight. The operatic music in the opening credits reaches a climax. In the first scene, the Montague boys have their own introductory music called 'The Boys'.
It is an upbeat, retro sound revealing their adolescent characters. This caters for the teenage audience Luhrmann is trying to reach because it is a modern style of music. The Capulets have Western-style music mirroring a cowboy style to represent their villainous characters. The soundtracks introducing the two families give the audience a sense of their personalities. The sound effects of the screenplay are Western; this creates a Country and Western style atmosphere and tells the audience there is going to be a gunfight. Also, the pan pipes (symbolising the whistling of the wind) and the creaking of a rusty sign indicate a gunfight in the making and create a comical effect to relax the intense atmosphere. When the Montagues and Capulets meet there is complete silence, suggesting the start of a gunfight. During the gunfight there is a blend between opera and a Western style of music, showing equality at that point. In this essay I have explained how Baz Luhrmann has reached his audience and established mood in the opening credits and first scene of his film adaptation of Romeo and Juliet through his modernisation of Shakespeare's original text. I have done this by discussing the genre, the differences between the screenplay and the text, the mise-en-scene, lighting and camera shots, and the soundtrack. Baz Luhrmann has made clear changes in his screenplay and has produced a successful modernised film of Romeo and Juliet.

Sunday, September 29, 2019

Trends In Epidemiology Of Hiv Health And Social Care Essay

Zimbabwe has the third largest HIV burden in Southern Africa, with an estimated 1 million adults aged 15 and above and 150,000 children under 15 living with HIV (1). Harare, the province in which the capital is located, accounts for the largest proportion of people living with HIV in the country (just under 20%), and Bulawayo, the country's second largest city, accounts for the smallest proportion (just over 5%). Zimbabwe has a generalised HIV epidemic, with exceptionally high levels of HIV prevalence in the past and significantly lower levels at present. It is estimated that between 1998 and 2010 adult HIV prevalence halved, from 27.2% to 14.3% (2). The epidemic in Zimbabwe has contracted faster than any other HIV epidemic in Eastern and Southern Africa, as Figure 1 (1) below illustrates.
Figure 1: HIV prevalence curves from East and Southern Africa
The contraction in HIV prevalence is attributed to very high mortality as well as significant changes in sexual behaviour (1). During the economic crisis Zimbabwe faced, the health system collapsed to the extent that most HIV-infected persons died due to a lack of antiretroviral drugs and facilities for the treatment of opportunistic infections. In terms of behaviour change, data from the Population Services International (PSI) surveys conducted in 2001, 2003, 2005, 2006 and 2007 support this conclusion, particularly with regard to partner reduction. For men aged 15-29, the proportion reporting non-regular partners fell from 32% in 2001 to 21% in 2003, and remained near that level through later PSI surveys. For women aged 15-29, the estimates showed a reduction from 17% to 8% in the same period. Zimbabwe is geographically divided into 10 provinces. In contrast to other countries in the region, the Zimbabwean HIV epidemic is geographically quite homogeneous, with similar HIV prevalence levels across provinces (Figure 2). Geographic homogeneity also applies when HIV prevalence in rural and urban zones is compared: rural and urban residents have similar odds of being HIV infected (17.6% in rural vs. 18.9% in urban areas). There may nevertheless be significant heterogeneity in HIV prevalence at a local level, as noted in the very different levels of HIV prevalence among antenatal clinic attendees, with particularly high HIV prevalence levels among those resident in resettlement farms, growth points, highway and border towns (3).
Figure 2: Adult HIV prevalence by province in Zimbabwe. Source: Zimbabwe Demographic Health Survey 2005/6.
In Zimbabwe, adult HIV prevalence by sex is significantly higher among women aged 15-49 (21%) than among men in the same age cohort (14.5%) (4). This gender gap is even wider among young people. Females aged 15-19 years have significantly higher HIV prevalence rates than men in the same age group (Figure 3). The differential between female and male prevalence is also large in the age groups 20-24, 25-29 and 30-34 years, reflecting both historical transmission patterns and significant levels of age-disparate sexual relationships. The peak age for HIV infection in women is 30-34 years, while for men it is the 40-44 years age group.
Figure 3: HIV prevalence by age and sex in Zimbabwe. Source: 2005/6 ZDHS, Table 14.3
In 2007, an estimated 63,247 adults acquired HIV.
However, in 2009 it is estimated that this figure rose to 66,156 (about 182 new HIV infections daily) (5). HIV incidence was estimated at 0.85% in 2009. Projections into the future, based on current HIV prevalence, population growth and antiretroviral therapy use, indicate that the number of newly infected adults will continue to grow. Heterosexual sex within unions/regular partnerships accounts for the majority of new adult HIV infections in Zimbabwe. Other sources of new infections include casual heterosexual sex and sex work. The UNAIDS Modes of Transmission (MoT) model was used to model sources of new infections and overall incidence. The MoT modelling exercise confirmed that heterosexual contact remains the main mode of transmission in all areas of Zimbabwe, but this was represented by several different situations, including both casual and long-term partnerships and various degrees of transactional sexual relationships. Nationally, the model estimates that the majority of new infections occur among people in the general community who are not engaging in high-risk sexual activities. Persons in this risk category are in discordant, monogamous relationships of at least a year's duration but frequently longer (6). Mother-to-child transmission (MTCT) continues to remain a significant source of new infections among infants. Approximately 1 in 3 babies born to HIV-infected mothers are infected. HIV infection passed from an HIV-positive mother to her child during pregnancy, labour, delivery or breastfeeding is called mother-to-child transmission (MTCT). The percentage of babies born to HIV-infected mothers who are HIV infected has remained high, averaging 28.5% between 2006 and 2009. An estimated 15,000 children were newly infected with HIV in 2009 (5), the vast majority of them through MTCT.
Describe how HIV/AIDS surveillance data are collected and outline the advantages and limitations of these data collection approaches.
The collection of HIV prevalence data is very important for national HIV & AIDS programmes, especially in terms of policy making. There are several methods used, but I will describe antenatal clinic surveillance and population-based surveys, outlining the advantages and limitations of each.
Antenatal Clinic Surveillance
The main purpose of surveillance based on women attending antenatal clinics is to assess trends in HIV prevalence over time. However, because other data sources are lacking, antenatal clinic surveillance has also been used to estimate population levels of HIV. This is normally based on anonymous, unlinked, cross-sectional surveys of pregnant women attending antenatal clinics in the public health sector. Only first-time attendees are included, to minimise the chance of any woman being included more than once. Blood is taken routinely from pregnant women for diagnostic purposes, which include syphilis, rhesus and blood grouping. After personal identifiers are removed, the blood is tested for HIV. Antenatal clinic surveys are normally done yearly, at the same time of the year, to obtain an estimate of the point prevalence for that year. The national HIV prevalence of a country is frequently 80% of the prevalence rate in pregnant women attending antenatal clinics (7).
Advantages of Antenatal Clinic Surveillance
Antenatal clinics provide ready and easy access to a cross-section of sexually active women from the general population who are not using contraception. In generalised epidemics, HIV testing among pregnant women is considered a good proxy for prevalence in the general population (7). Data for pregnant women will reflect the prevalence in groups that may be at higher risk of infection because of their living arrangements (such as workers who live in hostels or army barracks) if those groups have regular unprotected sexual contact with women in the general population. The limitations of antenatal surveillance are recognised and acknowledged and, where possible, correction factors have been developed to overcome some of them. In countries with low levels of HIV prevalence, strategically placed sentinel sites can provide an early warning of the start of an epidemic (8). In recent years, many countries have expanded the geographical coverage (the number and sample sizes of sites) of sentinel surveillance, especially in rural areas, to improve the representativeness of the samples.
Limitations of Antenatal Clinic Surveillance
Most sentinel surveillance systems have limited geographical coverage, especially in smaller and more remote rural areas. Women attending antenatal clinics may not be representative of all pregnant women, because many women may not attend antenatal clinics or may attend private clinics. The rate of contraceptive use in a country may affect the number of pregnant women. The implementation of antenatal clinic-based surveillance varies considerably between countries (9). The quality of the surveys may vary over time depending on available resources. Antenatal clinic surveillance does not provide information about HIV prevalence in men. Because these surveys are conducted among pregnant women, estimates for men are based on assumptions about the ratio of male-to-female prevalence that are derived from community-based studies in the region; however, this ratio varies between countries and over time.
Population-Based Surveys
The limitations of antenatal surveillance systems with regard to geographical coverage, under-representation of rural areas and the absence of data for men have led to an interest in including HIV testing in national population-based surveys. Population-based surveys can provide reasonable estimates of HIV prevalence for generalised epidemics, where HIV has spread throughout the general population of a country. However, for low-level and concentrated epidemics, these surveys will underestimate HIV prevalence, because HIV is concentrated in groups with high-risk behaviour and these groups are normally not adequately sampled in household-based surveys. Some early surveys were designed for unlinked anonymous testing, in which the HIV test results could not be linked to individuals, whereas more recent surveys have incorporated linked anonymous testing, in which HIV test results can be linked to behavioural data without revealing the identity of any person who has been tested.
Advantages of Population-Based Surveys
In generalised epidemics, population-based surveys can provide representative estimates of HIV prevalence for the general population as well as for different subgroups, such as urban and rural areas, women and men, age groups and region or province (8).
The results from population-based surveys can be used to adjust the estimates obtained from sentinel surveillance systems. Population-based surveys provide an opportunity to link HIV status with social, behavioural and other biomedical information, thus enabling researchers to analyse the dynamics of the epidemic in more detail. Information from this analysis could lead to better programme design and planning.
Limitations of Population-Based Surveys
In population-based surveys, sampling from households may not adequately represent high-risk and mobile populations. In low-level or concentrated epidemics, population-based surveys therefore underestimate HIV prevalence. Non-response (either through refusal to participate or absence from the household at the time of the survey) can bias population-based estimates of HIV. (Collecting information on non-responders can assist in the process of adjusting for non-response.) Population-based surveys are expensive and logistically difficult to carry out and cannot be conducted often. Typically, these surveys are conducted every 5-10 years (8).
Outline the major factors driving the spread of HIV/AIDS in the community where you live or work.
The following are some of the factors which have been attributed to the spread of HIV in Zimbabwe.
Multiple concurrent partners (MCP): MCP is generally defined as sexual behaviour characterised by having more than one sexual partner in the same time period. Zimbabwean men are more likely to have multiple partners than women. According to the Zimbabwe Demographic Health Survey 2005-6 (ZDHS 2005-6), 1 in 10 women and 1 in 3 men aged 15-49 years who had sex in the 12 months preceding the survey had sex with two or more partners.
Low and inconsistent levels of condom use, especially among married couples: There is generally a low level of condom use in Zimbabwe, although the more casual the sexual encounter, the more likely a condom is used, due to increased risk perception. According to the ZDHS (2005-6), condom use is lowest amongst married couples and those with long-term partners, with only 3.6% of married women and 7.7% of men reporting using condoms the last time they had sex with a spouse or cohabiting partner. According to a study by SAFAIDS, about 52% of all new infections which occurred in 2009 occurred among married people, which makes marriage a risk union.
Low levels of male circumcision: Male circumcision is one of the best-documented ways to prevent HIV transmission, reducing it by about 60% according to three studies carried out in different countries in Africa: Rakai, Uganda (10); Kisumu, Kenya (11); and Orange Farm, South Africa (12). Male circumcision has been seen to work through the following mechanisms: reduction of the surface area by removing the foreskin, which has been seen to promote entry of the HIV virus; hardening of the exposed glans penis, thereby reducing abrasions and the risk of HIV penetration; and removal of the foreskin, which means HIV can no longer be trapped underneath it, thus minimising transmission. However, male circumcision in Zimbabwe remains low, with 10.5% of men aged 15-54 reporting being circumcised in the 2005/6 DHS. Such a low level is unlikely to affect overall HIV transmission to any important degree. In Zimbabwe, according to mathematical modelling (Figure 4), the number of new HIV infections will drop significantly if male circumcision services are expanded.
The modelling is promising, and what needs to be done is to provide more awareness and address the negative attitudes that remain.
Figure 4: Zimbabwe projected new infection cases with male circumcision. Adapted from a presentation by Karin Hartzold, PSI, Zimbabwe, 2010.
Age-disparate sexual relationships: Studies indicate that relationships between young women and older men are common and tolerated in Zimbabwe, as in many parts of sub-Saharan Africa, and are associated with unsafe sexual behaviour and increased HIV risk, as data from the 2005-6 ZDHS indicate. In such relationships condom use tends to be selective and strategic, and such use increases HIV risk.
High levels of sexually transmitted infections: Sexually transmitted infections increase the risk of HIV infection. This risk is much higher with ulcerating infections like syphilis and herpes simplex. The prevalence of sexually transmitted infections in Zimbabwe is very high and this has been contributing to a high HIV prevalence rate. In Zimbabwe, the 2009 ANC Sentinel Surveillance Report showed that women with current or past genital ulcer disease (GUD) had about three times the HIV prevalence of women without a history of GUD. Among young ANC attendees aged 15-24, those with GUD had an HIV prevalence of 31%. This is corroborated by the ZDHS 2005-6, which found that men and women who reported a recent STI were significantly more likely to be HIV positive: 40% of women who reported having had an STI or STI symptoms in the previous 12 months were HIV-infected, compared to 24% who did not report an STI or STI symptom. For men, the corresponding HIV prevalence figures were 32% and 18%.
Other factors: Though the above factors are the leading ones in terms of spreading HIV & AIDS in Zimbabwe, other factors like poverty, migratory labour systems with family disruption, commercial sex work, and the low status of women due to gender discrimination and male dominance still play a significant part in promoting HIV transmission.

Saturday, September 28, 2019

Evolution of modern dance Personal Statement Example | Topics and Well Written Essays - 500 words

Evolution of modern dance - Personal Statement Example The movements are put into "bits" in just the same way we communicate through language. The whole dance art is a creative process in which life experience plays a critical role. The feelings of the audience and their aesthetic responses are what choreographers tend to be so sensitive to. The process of creativity within the context of dancing is a showcase of a sense of personal growth and discovery, that is, discovery arising from the subconscious. America grew up with dance. American dance continues to be a barometer of life among Americans. From the streets to the stage, dance in America captured everyday gestures, cultural retentions, social dances, spiritual principles, and socio-political issues. These sources, in combination with a spirit of risk-taking, persistence, exploration, and independence, have been the benchmark for the formation of what we today know as American modern dance. American modern dance has developed diverse movement vocabularies, social and cultural concerns, and individual choreographic impulses; it is an irreplaceable national treasure and touchstone. Since its inception, American modern dance has been a cultural mainstay at home and a crucial ambassador of American culture abroad. The development of the genre has been through a chain of succession as different generations build on the work of, or rather rebel against, their mentors, creating a lineage marked by innovation and also radicalism. The definition of modern dance cannot be neatly pinned down, but as history tells us, it is not a style per se but a continually evolving pursuit to share and discover the expressive potential of human movement. The choreographers who practise this contemporary dance genre use unique movements, innovative techniques, shapes, and gestures to suit the dynamics in the intentions of modern dance. Modern dance at times incorporates theatrical texts and

Friday, September 27, 2019

Capital Investment Thesis Proposal Example | Topics and Well Written Essays - 3000 words

Capital Investment - Thesis Proposal Example The capital investment project related to health that is selected for this paper is expansion and renovation of a diagnostic and imaging department. Effective imaging services in an emergency department begin by having enough space to cater for the high number of emergency cases. The imaging departments are known to offer a wide range of services and thus implying that they expect a high number of patients. Their services can be used in the treatment of different diseases, and injuries (Colchester East Hants Health Authority, 2014). The expansion diagnostic imaging department will be vital in creating enough space for emergency imaging services and providing enough room for new CT scanners. The room will be helpful in establishing modern environment in diagnostic imaging and ensure current standards in this emergency department are addressed. Creation of more space will also be helpful in ensuring that the issue of transferring inpatients to other hospitals is reduced. Third, there will be control and prevention of infections. The expansion will also lead to a new work environment, which will help recruit new emergency specialists and physicians. It will encourage a patient centred experience. Furthermore, there will be an opportunity to ensure that the emergency department for imaging responds to the community increasing needs. Last, it will help develop space for an ambulatory clinic. In funding capital expenditures, there are multiple sources that can be adopted (Sullivan & Steven, 2005. The source used will depend on the needs of the organization and the existence of other projects. In funding for this project, there can be an advance planning so that its funding can be considered in the coming fiscal year. In this regard, the hospital can decide and set some amount that will cater for the project for a given in time. In addition, the funds from this source can be used to supplement

Thursday, September 26, 2019

Making an Ethical Decision Essay Example | Topics and Well Written Essays - 750 words

Making an Ethical Decision - Essay Example Mary and her colleagues should not have agreed with the decision because, considering individual circumstances, 6 months is not an unfair amount of time to expect to be out on maternity leave. This is because, among other reasons, it is in the best interest of their children for them to take longer leave. According to a study carried out by the Economic Journal in 2005, children of American women who resumed work within 12 weeks were more likely to have cognitive and behavioural issues (Lerner 2011). This is because, despite the fact that most mothers will have physically recovered from childbirth within this time, there are a myriad of psychological factors that need to be factored in, such as time spent with the children. There is at present sufficient medical evidence to indicate that a mother spending only short periods of time with her newborn is a leading contributor to infant disorders and even death (Lerner 2011). While giving brief leave may appear to make economic sense to employers in terms of increasing the mother's time at work and, ergo, productivity, at closer scrutiny it is likely to be counterproductive. This is because nursing mothers who are also working are more prone to stress, depression and frustration (Melnick 2011) and as such may not be the best of workers. Following management's turndown, there are several avenues the employees can consider, some more radical than others. One course of action would be to try to renegotiate with management and attempt to come up with a compromise. This could involve offering to shorten their time off or even providing statistical evidence, if any can be found, that women given time off will ultimately be better workers. Alternatively, they could take activist routes and, through their union, take assertive actions like striking or suing management. The situation has several stakeholders; primary among them, of course, are the women employees, particularly those who plan on having children in the future. Nonetheless, the whole firm, including management and fellow workers as well as clients, has a stake in this. This is because if the women's claims are correct, the productivity of the company would go up in the long run if their demands are met. In case they decide to go on strike, the whole firm and clients will be affected directly or indirectly. Furthermore, other companies in the area and/or country will be affected, since not many companies give their employees that much time off, with the average time being less than 3 months, which is the amount of time the national Family Leave Law allocates (Lerner 2011). In case Mary's efforts are successful, it would spur others to follow suit and as such a large part of the labour industry may feel the impact. In fact, Melnick posits that six months is ideal because, according to research, after about this period work did not translate to poor parenting (Para 3). Management could try to settle the problems in several ways. Herein, two significant ones will be considered. For one, they could, as had been suggested to the women, try to come to some compromise; this way they could offer an increase in time off to what they consider reasonable and try to convince the women to take the deal. Alternatively, management could provide daycare facilities on the premises of the firm for nursing mothers to spend time with their kids. While management may find this an expensive venture, assuming they rejected the initial petition for financial

Wednesday, September 25, 2019

The common drivers contributing to employees satisfaction at late Research Proposal

The common drivers contributing to employees satisfaction at late career stage - Research Proposal Example It is therefore critical for organizations to understand and outline what actually motivates employees working at the later stage of their career. This research study will focus on understanding and exploring what actually motivates employees working at the later stage of their careers. Employee motivation has been one of the most hotly debated and discussed topics in the academic literature, with many theoretical underpinnings outlining what actually motivates an employee throughout his or her career. From Maslow's hierarchy of needs to the latest research on the topic, the literature indicates the overall importance of motivation within an organization. It is, however, critical to note that every organization contains a mix of employees belonging to different age groups and career stages. The motivation drivers for employees at the late stage of their career may therefore be different from the motivators for employees at the early or mid-career stage (Dwyer, 2009). Each employee passes through three different career phases, i.e. the start, mid and later stages, and at each stage the level of motivation and the motivators change because employee needs change with the passage of time. As a person ascends the hierarchy of needs, the nature of the motivators changes and employees look for different and unique ways to get themselves motivated and to generate the level of job satisfaction required to retain the job. The overall research problem is based upon understanding and exploring what the key and common drivers of motivation and job satisfaction are for employees working at the later stage of their career. As outlined above, the motivation and education needs of employees at the three different stages of their career are different. The overall research objective therefore is to explore and assess what motivates employees who are at the later

Tuesday, September 24, 2019

Tour Operations Management Essay Example | Topics and Well Written Essays - 2500 words

Tour Operations Management - Essay Example The most important aspects of a holiday usually coordinated by a tour operator include the type of travel, transfers, excursions, facilities among other services. One easiest way to distinguish tour operators from other practitioners like travel agents is by establishing their form and features. A tour operator will, for this reason, bring together various subsets of tourism experience and offer it as a package. A package offered by tour operators is usually referred to inclusive tour. Inclusive tour mostly includes at least two elements often offered at an inclusive sale price and will encompass a stay of move for more than twenty-four hours in overnight accommodation. These elements range from transportation, foods, accommodation to other tourist services. The kind and variety of packages in a given market is mostly categorized into two categories, that is, those that use the traditional charter flight and those using booked flights. Booked flights are mainly used when it is consid ered uneconomic for tour operator to purchase charter flights. The types of package in a tour operators industry is also often categorized according to a mode of travel or mode of accommodation (Chauhan, 2009). In the case of mode of travel, the package involves issues like coach holiday or ferry. Mode of travel can also be based on ion twin transport packages like fly-drive, which are mostly popular with inbound tourist in the United States of America (Negi, 2006). Segmentation by mode of accommodation on the other hand is where hotels chains assumes the role of tour operators by packaging their excess capacity to offer weekends or short breaks in business attractions as in the case of inclusive package. An inclusive tour can also be segmented according to whether they are domestic or international, according to the length of the holiday, distance and destination type (Gupta, 2012).

Monday, September 23, 2019

The challenges of repaying a student loan Essay

The challenges of repaying a student loan - Essay Example As serious as this information may look, these accounts still fall short in many ways and may not be a true representation of the present problem. Such is the case that data collected and reported on student loan repayment cannot paint the right picture of the debt's effects on the economy (Suze Ormans). For instance, let's consider the data showing how many people are presently struggling with payment of their student loans. Presently such a statistic is measured by mere prediction, a factor that openly shows how students struggle with loan repayment. Based on the information gathered up to this point, it is obvious that much attention to the loan repayment burden is needed. This may be explained by the following reasons. To begin with, more students today are borrowing large sums of money for their college than before. Close to two-thirds of baccalaureate recipients today graduate with a loan burden, and their burden has greatly risen in the past decade on account of inflation. It is true that borrowing will always increase even with a gradual rise in tuition levels. Second, borrowers' payment rates are continually rising as interest rates increase. A good example is the federal consolidation loan offered in 2004-05. This loan earned interest at a rate of 3% or even below for some students, a factor that gave a reprieve to many borrowers known to be struggling with paying back. Borrowers today have no such privileges; instead, a 6.8% interest rate was introduced on federal student loans. This change meant that the loan repayment would take 10 years and cost 20% more compared to previous years (Dept. of Education, Office of Student Financial Assistance, pg 45). Additionally, this change meant that some students would be left out because of the aggregate loan restrictions in the loan programs offered by the federal government, leaving them with no other option but to borrow more expensive private loans. The third reason closely relates to

Sunday, September 22, 2019

KT boundary Essay Example | Topics and Well Written Essays - 500 words

KT boundary - Essay Example This unique layer in terms of its contents and age is believed to have been created sixty five million years ago. Geologists believe this demarcating line to be a clue to the extinction of dinosaurs and the spread of mammals on the earth. The contents of this layer are mainly clay in the bottom and in the upper layer it has got mixtures of minerals like quarts and broken pieces of prismatic crystal substances. The presence of iridium in this layer is very high. Supporting the theory of the mass impact as the presence of it is very less in earth whereas it is abundant in asteroids. The thickness of KT Boundary: Panelists unanimously agree that there is variation in the thickness of the layer. It becomes thinner in Northern America and Canada. But in Italy it is just one centimeter in thickness. The panelists observe that the line gets thinner when it moves from Southern American states into the Northern American states to Canada. The thickness of the layer is three centimeters in America whereas it is only 1 cm thick in Italy owing to the climate impacts on the boundary. KT Boundary and the Extinction of Dinosaurs: The most scientists believe that dinosaurs became extinct because of a single catastrophic event; a massive asteroid impact and gradually the earth witnessed the spread of mammals. The fall of the asteroid led to a kind of situation on the atmosphere where there was no sunlight and thus there was no process of photosynthesis which made a greater crack in the food chain which led to massive extinction. As a result, comparatively bigger animals got extinct whereas mammals and other minor organisms could survive as they were able to hibernate .Scientists are of the opinion that dinosaurs wanted a large area to live in and they could not withstand the impact of asteroid fall and the aftereffects.. In the talk, panelists tell us of crocodiles and turtles who could live underneath the water after the mass impact. Alternate Theory: Melvyn Brag after

Saturday, September 21, 2019

Structuralism Pleasantville Essay Example for Free

Structuralism Pleasantville Essay Semiology telling a deeper tale... Pleasantville may not be so pleasant after all. In the film Pleasantville, David is obsessed with the 50s sitcom Pleasantville. He uses this show as an excuse to escape from the harsh reality he is forced to deal with every day. In relevance to society, if Pleasantville acted as a religious allusion, could humanity be turning to religion to provide them with a light in the dark when the going gets tough, just as David looks to this unrealistic TV show to escape from the darkness surrounding his family, high school and teenage years? How is it that elements of a plot such as symbolism and allusions can hide the fact that Pleasantville may not be so pleasant after all? The main element in structuralist criticism is semiology; the film Pleasantville has many subtle themes and meanings camouflaged by allusions and signifiers. This film takes many elements of religion, controversy and censorship into consideration; the film demonstrates these themes with symbols and allusions directly related to historical events that have been learned about for generations. In the upcoming paragraphs, these symbols, themes and meanings will be thoroughly discussed. The idea of religion, mainly Christianity, is present throughout the film. The aspect of Christianity is supported by references to historical events and biblical ideas. For instance, when we are introduced to Pleasantville, the town seems to be perfect, as if nothing could go wrong: wrong is unheard of. An example from the film would be when the Pleasantville basketball team simply couldn't miss a shot, it just wasn't possible; or when the Pleasantville firefighters are called to rescue cats from trees, because that is, in the town's reality, one of the only problems needing attention from emergency personnel. Right from the beginning the viewer feels the unsettling religious connection to the Bible stories he or she may remember as an innocent child. This place, Pleasantville, was in theory the Garden of Eden. This becomes quite clear to the viewer when he or she recognises the first colour change within Pleasantville, something as simple as a flower, triggered by change, knowledge of good and evil, emotion and free will, or in theory sin. Mary Sue demonstrates sexual freedom, as she is unaware that 'hooking up with boys' is not allowed in this town. With this act of showing emotion and changing the so-called 'normal' or 'unharmed' way of life in Pleasantville, Mary Sue begins the cycle of change and/or sin, which will continue as a constant theme in the film. Throughout the beginning of the film David tries to contain Pleasantville's innocence by encouraging his sister Jennifer, and everyone around him, to be naive to reality and to avoid thinking outside the box; David does not want the only pleasant place left in his own life to be spoiled by reality. David's approach begins to change at a crucial moment in the film. When David (Bud) takes Jennifer on a date, she offers him an apple; this poses as the driving force of evil (or otherwise free will and knowledge). Bud acknowledges this moment and realizes it is time to accept the change in Pleasantville, and that maybe being naive to reality isn't such a pleasant thing after all. Could change really be that evil? As we can clearly see towards the middle of the film, the characters in Pleasantville are becoming oddly familiar, as if they too are from our childhood Bible stories.
At this point it becomes quite obvious that we have assigned biblical figures to certain characters in the film. First of all, Mary Sue is seemingly the most recognizable comparison. Mary Sue invites sin into Pleasantville as she visits blank at lovers blank. This compares to Eve eating the apple in the Garden of Eden and committing the first sin among humanity, thereby beginning the cycle of knowledge of good and evil and the temptation mankind has faced ever since. It is obvious that David is Adam, and he eventually stops trying to hold Mary Sue back and gives in to temptation, just as Adam and Eve did in the Bible. The next character could potentially be difficult to make a connection with. Bill Johnson, who owns the diner, is a huge force of change in the film. He has the biggest influence on Pleasantville next to Jennifer and David. Bill paints the Christmas mural every year in Pleasantville and has the opportunity to tap into his thoughts and beliefs. This could be what triggers him to be such an influence on the community during the time of drastic change. Bill is the first man open to change; he learns how to handle the diner by himself, and he embraces it. This allows his trapped free will to be released. When Bill becomes more comfortable with his newfound sense of freedom, he begins to paint in color, releasing new emotions, and in turn he lets himself fall in love. Bill's character could represent the progress humanity makes in setting itself free from its belief system and thinking outside the box; Bill encourages this. It is ironic how such a quiet man's thoughts could cause such a huge impact, as well as symbolize a step forward for mankind. The last character allusion that would most likely not be picked up on just by watching the film for enjoyment is the repairman. The repairman could doubtlessly play the role of a higher power in Pleasantville, an omniscient force who could be compared to God himself. The repairman is the one who sent Jennifer and David to Pleasantville in the first place, just as God put Adam and Eve on earth to live and abide by his rules. David and Jennifer indubitably disobeyed the repairman's orders, after the repairman trusted David to be in his paradise because of David's extensive knowledge of Pleasantville and how things work there. The repairman continued to show up on televisions in Pleasantville telling David and Jennifer, basically, to smarten up, just as God warned Adam and Eve to repent from sin, as explained in the Bible. Although characters helped the viewer relate to the theme of the film, there were also very prominent allusions to renowned historical events and controversial literature. Along with religion, controversial literature and events in history were used to help release free will and open minds in Pleasantville. This film somewhat shows the progress of humanity through history from the time Adam and Eve first introduced sin into the world. Many of the conflicts in the film came from this idea. To begin, towards the end of the film, as free will and color spread quickly through Pleasantville, there remained a group of stubborn people who could not comprehend the idea of change (as there always has been in history). These people in Pleasantville began to burn coloured books filled with information that encouraged free will and open-minded thinking.
This scene in the film is identical to a situation that took place in history, when religious people were desperately trying to contain purity and innocence by refusing to read about things that were not in the interest of God. This shows us that if everyone in history had been as open to change as Bud and Bill Johnson, certain conflicts wouldn't have arisen. Another allusion to history is the famous courtroom scene that is strikingly similar to the trial that takes place in 'To Kill a Mockingbird' by Harper Lee. This scene in Pleasantville demonstrates how the idea of free will and diversity was being oppressed by stubborn people who were absolutely opposed to change. In 'To Kill a Mockingbird' we see history moving forward with the help of Atticus Finch defending a black man. This same theme applies here, as David and Bill Johnson, as well as other coloured Pleasantville citizens, encourage change for the better. One of the last allusions in the film is very broad and has occurred in history repeatedly. This is the idea that painting and artistic expression were being oppressed in Pleasantville just as they were in the Western world for countless years, for the same reason as the burning of books and the courtroom trial. It is human nature for people to become anxious and unsettled when it comes to change; citizens of Pleasantville became upset when artwork appeared around the city because seeing something so controversial was extremely nerve-wracking. The small mentions of other controversial literature, such as Moby-Dick, Of Mice and Men and Lord of the Flies, painted into the artwork also gave viewers a sense of history repeating itself. Ultimately, the signifiers in this film were very clear; however, as an analyst using the structuralist perspective, it was much more difficult to find the meaning of each allusion in the film. Structuralism's main analytical element is semiology. Pleasantville's many hidden themes and meanings can be revealed through symbolism and historical and religious allusions. After all, the viewer could combine the semiology to form a theme interpreted as follows: Jennifer and David played the role of Adam and Eve in the Garden of Eden; they are placed on the flawless earth (Pleasantville) with the knowledge of good and evil and the gift of free will. In the Bible, Adam and Eve take advantage of this and commit sin against God by doing wrong in Eden. Many Christians believe this is the reason for all evil on earth. However, in Pleasantville this could be considered a step forward for humanity, a discovery. The real question after watching this film is: is religion holding us back? Is religion the phenomenon that could be causing humanity to continue repeating history and constantly making the same mistakes? The film Pleasantville really makes you question humanity and how it interferes and intertwines with religion and a higher power. Will history keep repeating itself until mankind finally gets it right? Or will we continue to learn from our repetitive sins and always end up in the same spot history seems to keep throwing us into: Pleasantville?

Friday, September 20, 2019

Load Balancing as an Optimization Problem: GSO Solution

Load Balancing as an Optimization Problem: GSO Solution

METHODOLOGY

INTRODUCTION
In this chapter, we present a novel methodology which treats load balancing as an optimization problem. A stochastic approach, glowworm swarm optimization (GSO), is employed to solve this optimization problem. In the proposed method, useful features of the various existing load balancing algorithms discussed in Chapter 2 are also integrated.

PROPOSED METHODOLOGY
There are numerous cloud computing categories. This work mainly focuses on a public cloud. A public cloud is based on the typical cloud computing model, and its services are provided by a service provider [42]. A public cloud comprises several nodes, and the nodes are in different physical locations. To manage this large cloud, the cloud is partitioned: it consists of several cloud partitions, each partition having its own load balancer, with a main controller that manages all the partitions.

3.2.1 Job Assignment Strategy
The algorithm for assigning jobs to a cloud partition, shown in Fig. 3.1, is as follows:
Step 1: jobs arrive at the main controller.
Step 2: a cloud partition is chosen.
Step 3: if the cloud partition state is idle or normal, then
Step 4: the jobs arrive at that cloud partition's balancer, and
Step 5: the balancer assigns the jobs to particular nodes based on the strategy.
Figure 3.1: Flowchart of Proposed Job Assignment Strategy.

Load Balancing Strategy
In the cloud, load balancing is a technique to distribute workload over one or more servers, network links, hard drives, or other resources. Representative datacenter implementations depend on massive computing hardware and network communications, which are subject to the usual risks linked with any physical infrastructure, including hardware failure, power interruptions and resource limits in times of high demand. A high quality of load balance will increase the performance of the entire cloud. However, there is no general procedure that works in all possible conditions. Several methods have been employed to solve the problem, and each has its merit in a specific situation but not in all circumstances. Hence, the proposed model combines various methods and switches between the appropriate load balancing methods according to the system status. Here, the idle status uses fuzzy logic while the normal status uses a glowworm swarm optimization based load balancing strategy.

Load Balancing using Fuzzy Logic
When the status of a cloud partition is idle, many computing resources are free and comparatively few jobs are arriving. In these circumstances, the cloud partition has the capability to process jobs as quickly as possible, so a simple load balancing method can be used. Zadeh [12] proposed a fuzzy set theory in which the set boundaries are not precisely defined but are instead gradational. Such a set is characterized by a continuum of grades of membership, a function which allocates to each object a membership grade ranging from zero to one [12]. A new load balancing algorithm based on fuzzy logic in the virtualized environment of cloud computing is implemented to achieve better processing and response times. The load balancing algorithm is applied before a job reaches the processing servers; the job is scheduled based on input parameters such as the assigned load of the virtual machine (VM) and its processor speed. The balancer holds this information for each VM, along with the number of requests currently assigned to each VM in the system.
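Before turning to the fuzzy details, the overall dispatch of Section 3.2.1 can be summarised in a short sketch. The listing below is illustrative only: the class names, the PartitionState values and the two placeholder balancer functions are assumptions introduced for this example and are not part of the proposed system; it simply shows the main controller routing a job to the first usable partition and handing it to that partition's own balancer (fuzzy logic for idle, GSO for normal).

# Illustrative sketch of the job assignment strategy only; names below are assumptions.
from dataclasses import dataclass
from enum import Enum


class PartitionState(Enum):
    IDLE = "idle"      # few jobs arriving, plenty of free resources
    NORMAL = "normal"  # jobs arriving faster, partition still usable


@dataclass
class CloudPartition:
    name: str
    state: PartitionState


def fuzzy_balance(partition, job):
    """Placeholder for the fuzzy-logic balancer used when the partition is idle."""
    print(f"[fuzzy] {job} assigned within {partition.name}")


def gso_balance(partition, job):
    """Placeholder for the GSO-based balancer used when the partition is normal."""
    print(f"[gso] {job} assigned within {partition.name}")


def main_controller(partitions, job):
    """Steps 1-5: the job arrives at the main controller, a usable partition is chosen,
    and the job is handed to that partition's own load balancer."""
    for partition in partitions:                                    # Step 2: choose a partition
        if partition.state in (PartitionState.IDLE, PartitionState.NORMAL):   # Step 3
            balancer = fuzzy_balance if partition.state is PartitionState.IDLE else gso_balance
            balancer(partition, job)                                # Steps 4-5: partition balancer assigns the job
            return partition
    raise RuntimeError("no partition can accept the job at the moment")


if __name__ == "__main__":
    parts = [CloudPartition("P1", PartitionState.NORMAL), CloudPartition("P2", PartitionState.IDLE)]
    main_controller(parts, job="job-42")                            # Step 1: job arrives at the main controller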
From the information held for each VM, the balancer identifies the least loaded machine: when a user request arrives, the first least loaded machine is identified and the request is processed there. When more than one least loaded machine is available, the new fuzzy-logic-based load balancing technique is applied; fuzzy logic is close to natural human language, which lets us formulate the load balancing problem in those terms. The fuzzification process is carried out by a fuzzifier that transforms two types of input data, the assigned load and the processor speed of a virtual machine (VM), into the fuzzy values required by the inference system, with one output, the balanced load, as shown in Figures 3.2, 3.3 and 3.4 respectively. Fuzzy logic is used to evaluate the load and the processor speed of a virtual machine as the two input parameters and to produce a better value for equalizing the load in the cloud environment. These parameters are taken as inputs to the fuzzifier, which are needed to estimate the balanced load as output, as shown in Figure 3.4.
Figure 3.2: Membership input function of Processor Speed
Figure 3.3: Membership input function of Assigned Load
Figure 3.4: Membership output function of Balanced Load
To combine the outputs of the inference rules [13], the low-high inference method is employed. A number of IF-THEN rules are determined using rule-based fuzzy logic to obtain the output response for given input conditions; each rule is composed of a set of linguistic control rules and the supporting control objectives of the system.
If (processor_speed is low) and (assigned_load is least) then (balanced_load is medium)
If (processor_speed is low) and (assigned_load is medium) then (balanced_load is low)
If (processor_speed is low) and (assigned_load is high) then (balanced_load is low)
If (processor_speed is Medium) and (assigned_load is least) then (balanced_load is high)
If (processor_speed is Medium) and (assigned_load is medium) then (balanced_load is medium)
If (processor_speed is Medium) and (assigned_load is high) then (balanced_load is low)
If (processor_speed is high) and (assigned_load is least) then (balanced_load is high)
If (processor_speed is high) and (assigned_load is medium) then (balanced_load is medium)
If (processor_speed is high) and (assigned_load is high) then (balanced_load is medium)
If (processor_speed is very_high) and (assigned_load is least) then (balanced_load is high)
If (processor_speed is very_high) and (assigned_load is medium) then (balanced_load is high)
If (processor_speed is very_high) and (assigned_load is high) then (balanced_load is medium)
As shown above, there are 12 possible logical output responses in the proposed work. Defuzzification is the method of converting the fuzzy output set into a single value, and the smallest-of-maximum (SOM) procedure is employed for defuzzification. The aggregated fuzzy set comprises a range of output values that are defuzzified in order to decode a single output value. The defuzzifier takes the accumulated linguistic values from the latent fuzzy control action and produces a non-fuzzy control output, which represents the balanced load associated with the load conditions. The defuzzification process evaluates the membership function of the aggregated output.
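As an illustration of how such a rule base can be evaluated, the sketch below implements a minimal Mamdani-style inference step. The trapezoidal membership functions and the 0-100 universes of discourse are assumptions standing in for Figures 3.2-3.4, which are not reproduced here; only the 12 rules and the smallest-of-maximum defuzzification follow the text, so the numerical output should be read as indicative only.

# Illustrative fuzzy inference sketch; membership shapes and ranges are assumptions.
import numpy as np


def trap(x, a, b, c, d):
    """Trapezoidal membership: rises from a to b, flat from b to c, falls from c to d."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / (b - a), 0.0, 1.0) if b > a else (x >= a).astype(float)
    fall = np.clip((d - x) / (d - c), 0.0, 1.0) if d > c else (x <= d).astype(float)
    return np.minimum(rise, fall)


# Assumed fuzzy sets on a 0-100 scale for both inputs and the output.
SPEED = {"low": (0, 0, 10, 35), "Medium": (20, 40, 50, 70),
         "high": (55, 70, 80, 90), "very_high": (80, 90, 100, 100)}
LOAD = {"least": (0, 0, 15, 40), "medium": (25, 45, 55, 75), "high": (60, 85, 100, 100)}
BAL = {"low": (0, 0, 15, 40), "medium": (25, 45, 55, 75), "high": (60, 85, 100, 100)}

# (processor_speed term, assigned_load term) -> balanced_load term, exactly as listed above.
RULES = {("low", "least"): "medium", ("low", "medium"): "low", ("low", "high"): "low",
         ("Medium", "least"): "high", ("Medium", "medium"): "medium", ("Medium", "high"): "low",
         ("high", "least"): "high", ("high", "medium"): "medium", ("high", "high"): "medium",
         ("very_high", "least"): "high", ("very_high", "medium"): "high",
         ("very_high", "high"): "medium"}


def balanced_load(speed, load):
    """Fire all 12 rules (AND = min), aggregate with max, then defuzzify by taking the
    smallest output value at which the aggregated membership reaches its maximum (SOM)."""
    z = np.linspace(0.0, 100.0, 1001)
    aggregated = np.zeros_like(z)
    for (s_term, l_term), out_term in RULES.items():
        strength = float(min(trap(speed, *SPEED[s_term]), trap(load, *LOAD[l_term])))
        aggregated = np.maximum(aggregated, np.minimum(strength, trap(z, *BAL[out_term])))
    return z[int(np.argmax(aggregated))]


if __name__ == "__main__":
    # A fast, lightly loaded VM should receive a high balanced_load score.
    print(balanced_load(speed=85, load=20))

In a balancer, the returned score could then be compared across candidate VMs so that an incoming request goes to the machine with the highest balanced_load value.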
Algorithm 1, which manages the load on the virtual machines of the cloud, is defined as follows:

Begin
    Request_to_resource()
L1: If (resource free)
        Begin
            Estimate connection_string()
            Select fuzzy_rulebase()
            Return resource
        End
    Else
        Begin
            If (any more resources found)
                Select_next_resource()
                Go to L1
            Else
                Exit
        End
End

The proposed algorithm starts by requesting a connection to a resource and tests whether the resource is available. If a resource is found, it calculates the connection strength and then selects the connection used to access the resource according to the processor speed and load of the virtual machine, using fuzzy logic.

Load Balancing using GSO (Glowworm Swarm Optimization)
When the status of a cloud partition is normal, tasks arrive at a faster rate than in the idle state and the situation becomes more complex, so a novel strategy is deployed for load balancing. Every user wants their job completed in the shortest time, so the public cloud requires a strategy that can finish the jobs of all users with an adequate response.

In this optimization algorithm, each glowworm i is distributed in the definition space of the objective function [14]. Glowworms carry their own luciferin values and have an associated scope called the local-decision range r_d^i. Each glowworm searches its local-decision range for a neighbour set and, within that set, is attracted to the neighbour with the brightest glow: it selects a neighbour whose luciferin value is greater than its own, so its direction of movement changes each time the selected neighbour changes. Each glowworm encodes the objective function value at its current location into a luciferin value and advertises it within its neighbourhood. The neighbour set of a glowworm consists of those glowworms that have a comparatively higher luciferin value and are situated within its dynamic decision range, which is updated by equation (8) at each iteration.

Local-decision range update:
r_d^i(t+1) = \min\{ r_s, \max[ 0, \; r_d^i(t) + \beta ( n_t - |N_i(t)| ) ] \}        (8)
where r_d^i(t) is the local-decision range of glowworm i at iteration t, r_s is the sensor range, n_t is the neighbourhood threshold, and the parameter \beta governs the rate of change of the neighbourhood range.

The local-decision range contains the following set of glowworms:
N_i(t) = \{ j : \lVert x_j(t) - x_i(t) \rVert < r_d^i(t), \; \ell_i(t) < \ell_j(t) \}        (9)
where x_j(t) is the position of glowworm j at iteration t and \ell_j(t) is its luciferin at iteration t; the set of neighbours of glowworm i thus consists of those glowworms that have a comparatively higher luciferin value and are situated within the dynamic decision range, which is bounded above by the circular sensor range r_s. As given in equation (10), each glowworm i selects a neighbour j with probability p_{ij}(t) and moves toward it.

Probability distribution used to select a neighbour:
p_{ij}(t) = ( \ell_j(t) - \ell_i(t) ) / \sum_{k \in N_i(t)} ( \ell_k(t) - \ell_i(t) )        (10)

Movement update:
x_i(t+1) = x_i(t) + s ( x_j(t) - x_i(t) ) / \lVert x_j(t) - x_i(t) \rVert        (11)

Luciferin update:
\ell_i(t+1) = (1 - \rho) \ell_i(t) + \gamma J( x_i(t+1) )        (12)
where \ell_i(t) is the luciferin value of glowworm i at iteration t and \rho is the luciferin decay constant, so that \ell_i reflects the accumulated goodness of the path followed by the glowworm through its running luciferin value; the parameter \gamma scales the function fitness values, and J(x_i(t+1)) is the value of the test function at the glowworm's new position.
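A minimal Python sketch of one GSO run implementing equations (8)-(12) is given below; the parameter values, the search bounds and the toy objective are illustrative assumptions rather than the settings used in the thesis.

# Minimal sketch of GSO following equations (8)-(12); parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def gso(objective, dim=2, n_glowworms=30, iters=100,
        rho=0.4, gamma=0.6, beta=0.08, n_t=5, s=0.03, r_s=3.0, l0=5.0):
    x   = rng.uniform(-3.0, 3.0, size=(n_glowworms, dim))   # positions
    ell = np.full(n_glowworms, l0)                           # luciferin values
    r_d = np.full(n_glowworms, r_s)                          # local-decision ranges

    for _ in range(iters):
        # Eq. (12): luciferin update from the objective value at each position
        ell = (1 - rho) * ell + gamma * np.array([objective(xi) for xi in x])
        new_x = x.copy()
        for i in range(n_glowworms):
            d = np.linalg.norm(x - x[i], axis=1)
            # Eq. (9): neighbours are brighter glowworms inside the decision range
            nbrs = np.where((d < r_d[i]) & (ell > ell[i]))[0]
            if nbrs.size:
                # Eq. (10): selection probability proportional to luciferin difference
                p = (ell[nbrs] - ell[i]) / (ell[nbrs] - ell[i]).sum()
                j = rng.choice(nbrs, p=p)
                # Eq. (11): move a step of size s toward the chosen neighbour
                new_x[i] = x[i] + s * (x[j] - x[i]) / np.linalg.norm(x[j] - x[i])
            # Eq. (8): shrink or grow the decision range toward n_t neighbours
            r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - nbrs.size)))
        x = new_x
    return x[np.argmax(ell)]            # brightest glowworm's position

# Toy use: maximise a peaked test function (a stand-in for "server goodness")
print(gso(lambda v: -float(v @ v)))     # converges near the optimum at the origin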
Figure 3.5 shows the flowchart of the GSO algorithm. In the context of load balancing for cloud computing, the GSO algorithm checks the status of the servers concurrently to determine whether they are free. For example, when a user wants to download a file of 50 MB, the algorithm checks at each iteration whether the user has been admitted to a server; once admitted, the user receives the message that the target has been achieved.
Figure 3.5: Flowchart of GSO

Analysis of the Accrual Anomaly | Accounting Dissertation

Sloan (1996), in a determinative paper, added the accrual anomaly to the list of market imperfections. Since then, academics have focused on the empirical investigation of the anomaly and its connection with other mispricing phenomena. The accrual anomaly is still at an embryonic stage and further research is needed to confirm the profitability of an accruals-based strategy net of transaction costs. The current study investigates the accrual anomaly using a UK sample from 1991 to 2008. In addition, the predictive power of the Fama and French (1996) factors HML and SMB is tested along with the industrial production growth, the dividend yield and the term structure of interest rates.

Chapter 1 Introduction
Since the introduction of the random walk theory, which formed the basis for the development of the Efficient Market Hypothesis (EMH hereafter) proposed by Fama (1965), the financial literature has made many advances, but a piece of the puzzle that is still missing is whether the EMH holds. Undoubtedly, this debate can be considered one of the most fruitful and fast-progressing financial debates of the last two decades. The Efficient Market Hypothesis has met many challenges regardless of which of its three forms is being investigated; however, the weak form and the semi-strong form have been the most controversial. A vast collection of academic papers discusses, explores and argues for phenomena that seem to reject the view that financial markets are efficient. The famous label of "anomaly" has taken several forms. Well-known anomalies such as contrarian investment, the post-announcement drift, the accruals anomaly and many others are just the beginning of an endless trip; there is little doubt that many more will be introduced and that evidence of investors' ability to earn abnormal returns will be documented. Recently, academics have tried to extend their investigation to whether these well-documented anomalies are actually profitable given several limitations (transaction costs etc.) and whether the anomalies are connected. Many papers explore the connection of the anomalies with each other, proposing the existence of a "principal" mispricing that is documented in several forms. The current study looks into the anomaly that was initially documented by Sloan (1996) and has been labelled the "accrual anomaly". The accrual anomaly can be characterised as being at an embryonic stage if the basis for comparison is the number of publications and the dimensions of the anomaly on which light has been shed. The facts about the accrual anomaly suggest that investors have the opportunity to earn abnormal returns by taking advantage of simple, publicly available information.
On the other hand, accruals, being an accounting figure, have been approached from different points of view, with consequences visible in the results of the academic papers. Furthermore, Stark et al (2009) challenge the actual profitability of the accrual anomaly simply by taking transaction costs into consideration. The present paper employs an accrual strategy for a sample of UK firms during 1991-2008. The aim is to empirically investigate the profitability of such strategies over the whole data sample. The methodology for the calculation of accruals is largely based on the paper of Hardouvelis et al (2009). Stark et al (2009) propose that the positive excess returns of the accruals strategy are based on the profitability of small stocks. In order to investigate this claim, the current study employs an additional strategy by constructing intersecting portfolios based on accruals and size. Finally, five variables are investigated in the second part of the study for their predictive power on the excess returns of the constructed portfolios. The monumental paper of Fama and French (1996) documented an impressive performance of two constructed variables (the returns of the portfolios named HML and SMB). In addition, the dividend yield of the FTSE All Share index, the industrial production growth and the term structure of interest rates will be investigated, as they are considered potential candidates for the prediction of stock returns.

Chapter 2 Literature review
2.1. Introduction
During the last century the financial world has seen many substantial advances. From the Portfolio Theory of Markowitz (1952) to the development of the Capital Asset Pricing Model of Sharpe (1964) and Lintner (1965), and from the Efficient Market Hypothesis (hereafter EMH), developed by Fama (1965), to the recent literature that challenges both the CAPM and the EMH, they all seem to form a chain reaction. The financial academic world aims to give difficult but important answers on whether markets are efficient and on how investors should allocate their funds. During the last two decades, many researchers have documented strategies that challenge the claims of the supporters of efficient and complete markets. In this chapter, the effort is focused on reviewing the financial literature from the birth of the idea of the EMH to the recent publications that confirm, reject or challenge it. In a determinative paper, Fama (1970) defined efficient markets and categorised them according to the type of information used by investors. Since then, the finance literature has offered a plethora of studies that aim to test or prove whether markets are indeed efficient or not. Well-known anomalies such as the post-announcement drift, the value-growth anomaly or the accruals anomaly have been the theme of many articles ever since.

2.2. Review of the value-growth anomaly
We consider it helpful to review the literature on the value-growth anomaly, since it was one of the first anomalies to be investigated to such an extent. In addition, the research on the value-growth anomaly has yielded a largely productive debate on whether the documented returns are due to higher risk or to some other source of mispricing. Basu (1970) concluded that stocks with a high earnings-to-price ratio tend to outperform stocks with a low E/P.
Lakonishok, Shleifer and Vishny (1994) documented that stocks with a low price relative to a fundamental (book value, earnings, dividends etc.) can outperform stocks with a high price relative to a fundamental measure of value. Lakonishok, Shleifer and Vishny (1994) initiated a productive period that aimed to settle the dispute over the EMH and to investigate the causes of such "anomalies". Thus, these researchers sparked the debate not only on the market efficiency hypothesis but also on the sources of these phenomena. Fama and French (1992) supported the idea that certain stocks outperform their counterparts due to the larger risk that investors bear, while Lakonishok, Shleifer and Vishny (1994) supported the idea that investors fail to react correctly to the information available to them. The same idea was supported by many researchers, such as Piotroski (2001), who also constructed a score (the F-Score) to identify stocks with a high B/M ratio that can yield positive abnormal returns. Additionally, the "market efficiency debate" drove behavioural finance to rise in popularity. The value-growth phenomenon has yielded many articles that aim to find evidence that a profitable strategy is feasible or to trace the sources of these profits, but the main approach adopted varies significantly from study to study. Asness (1997) and Daniel and Titman (1999) examine price momentum, while Lakonishok, Sougiannis and Chan (2001) examine the impact of the value of intangible assets on security returns. In addition, researchers have found evidence that value-growth strategies tend to be successful worldwide. To name a few, Chan, Hamao and Lakonishok (1991) focused on the Japanese market, Put and Veld (1995) based their research on France, Germany and the Netherlands, and Gregory, Harris and Michou (2001) examined the UK stock market. It is worth mentioning that the evidence of such profitable strategies alone could be sufficient to draw the attention of practitioners, but academics are additionally interested in exploring the main cause of these opportunities as well as the relationship between the aforementioned phenomena (namely, the value-growth anomaly, the post-announcement drift and the accrual anomaly). In general, two schools of thought have developed: one supports the risk-based explanation, in other words that these stocks yield higher returns simply because they are riskier, and the other holds that investors fail to recognise the correct signals contained in the available information.

2.3. The accruals anomaly
2.3.1. Introduction of the accrual anomaly
Sloan (1996) documented that firms with high (low) accruals tend to earn negative (positive) returns in the following year. Based on this finding, a portfolio that takes a long position in stocks with low accruals and a short position in stocks with high accruals yields approximately 10% abnormal returns. According to Sloan (1996), investors tend to overreact to information about current earnings. Sloan's (1996) seminal paper has been characterised as a productive work that initiated an interesting debate over the last decade. It is worth noting that even the very recent literature on the accrual anomaly has not reached a reconciling conclusion about the main causes of this particular phenomenon, or about whether a trading strategy (net of transaction costs) based solely on the mispricing of accruals can be systematically profitable.
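As an illustration of how such a hedge portfolio is typically formed, the following pandas sketch sorts firms into accrual deciles each year and computes the low-minus-high return spread. The DataFrame columns and the toy numbers are hypothetical placeholders; this is not the methodology of the present study.

# Hedged sketch of Sloan-style accrual decile portfolios with pandas.
import pandas as pd

def accrual_hedge_returns(df: pd.DataFrame) -> pd.Series:
    """For each year, sort firms into accrual deciles and return the spread
    between the lowest-accrual decile (long) and the highest-accrual decile (short)."""
    df = df.copy()
    df["decile"] = df.groupby("year")["accruals"].transform(
        lambda a: pd.qcut(a, 10, labels=False, duplicates="drop"))
    by_decile = df.groupby(["year", "decile"])["next_year_return"].mean().unstack()
    return by_decile[0] - by_decile[9]     # long low accruals, short high accruals

# Toy illustration with made-up numbers
demo = pd.DataFrame({
    "firm": [f"f{i}" for i in range(40)],
    "year": [2000] * 40,
    "accruals": [i / 40 for i in range(40)],
    "next_year_return": [0.10 - 0.002 * i for i in range(40)],
})
print(accrual_hedge_returns(demo))         # positive spread in this toy sample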
At this point it is worth mentioning that accruals have been found to be a statistically significant and negative predictor of future stock returns. On the other hand, there are papers that examine accruals and their relation with the aggregate market. A simple example is the paper published by Hirshleifer, Hou and Teoh (2007), who aim to identify the relation of accruals, if any, with the aggregate stock market. Their findings support that, while operating accruals have been found to be a statistically significant and negative predictor of stock returns, the relation with the market portfolio is strong and positive. They report that the sign of the accruals coefficient varies from industry to industry, reaching a peak for the High Tech industry (1.15) and taking a negative value for the Communication and Beer/Liquor industries.

2.3.2 Evidence for the international presence of the phenomenon
Researchers who investigated the accruals anomaly followed different approaches. At this point, it is worth noting that the evidence shows the accrual anomaly (although it was first found to be present in the US market) to exist worldwide. Leippold and Lohre (2008) examine the accrual anomaly within an international framework. The researchers document that the accrual anomaly is a fact for a plethora of markets. The contribution of the paper, though, is the large and "complete" number of tests used, so that the possibility of pure randomness would be eliminated. Although similar tests showed that momentum strategies can be profitable, recent methodologies used by the researchers and proposed by Romano and Wolf (2005) and Romano, Shaikh and Wolf (2008) suggest that the accruals anomaly can be partially "random". It is noteworthy that the additional tests make the "anomaly" fade out for almost all the samples apart from the US, Australian and Danish markets. Kaserer and Klingler (2008) examine how the overreaction to accrual information is connected with the accounting standards applied. The researchers constructed their sample solely from German firms, and their findings document that the anomaly is present in Germany too. We should mention at this point that, interestingly, prior to 2000, that is prior to the adoption of the international accounting standards by Germany, the evidence did not support the existence of the accrual anomaly. However, during 2000-2002, Kaserer and Klingler (2008) found that the market overreacted to accrual information. Hence, the authors support the idea that an additional cause of the anomaly is the lack of legal mechanisms to enforce the preparation of financial statements according to the international accounting standards, which might have given firms the opportunity to "manipulate" their earnings. Another paper that focuses on the worldwide presence of the accruals mispricing is that of Rajgopal and Venkatachalam (2007). Rajgopal and Venkatachalam examined a total of 19 markets and found that this particular market anomaly exists in Australia, the UK, Canada and the US. The authors' primary goal was to identify the key drivers that distinguish the markets where the anomaly was documented. Their evidence supports the idea that an accrual strategy is favoured in countries with a common law tradition, extensive accrual accounting and a low concentration of firm ownership combined with weak shareholders' rights.
LaFond (2005) also considers the existence of the phenomenon within a global framework. The author's findings support the notion that the accrual anomaly is present worldwide. In addition, LaFond argues that there is no unique driving factor responsible for the phenomenon across markets. It is worth noting that LaFond (2005) documented that this particular market imperfection is present in markets with diverse methodologies of accrual accounting. The findings are against the idea that the accrual anomaly has any relation with the level of shareholder protection or a common law tradition, as suggested by Rajgopal and Venkatachalam (2007). Finally, the author suggests that, if anything, it is not the particular method of accrual accounting (measurement issues) that favours or eliminates the accrual anomaly, but accrual accounting itself.

2.3.3. Further evidence for the roots of the accruals anomaly
Additionally, papers such as those of Thomas and Zang (2002) or Hribar (2000) decompose accruals into changes in different items (such as inventory, accounts payable etc.). The findings consistently suggest that extreme changes in inventory affect returns the most. On the other hand, many articles connect accruals with other information used by investors, such as the behaviour of insiders or analysts, as the latter can be considered a major signal to investors of a potential manipulation of the firm's figures. In particular, Beneish and Vargus (2002) documented that firms with high accruals and significant insider selling have substantial negative returns. Bradshaw (2001) and Barth and Hutton (2001) examine analysts' reports and their relation with the accruals anomaly. Their findings support that analysts' forecasting errors tend to be larger for firms with high accruals, while analysts do not revise their forecasts when new information on accruals becomes available. Gu and Jain (2006) decompose accruals into changes in inventory, changes in accounts receivable and payable, and depreciation expenses, and try to identify the impact of the individual components on the anomaly. Consistent with Sloan (1996), Gu and Jain (2006) document that the accrual anomaly exists at the component level. The findings are important since Desai et al (2004) supported the connection of the accrual anomaly with a single variable (cash flows from operations). The researchers suggest that the results yielded by Desai et al (2004) were highly dependent on the methodology used and thus that the accruals anomaly is "alive and well". Moreover, other articles try to confirm whether the anomaly is mainly caused by the wrong interpretation of the information contained in accruals. Ali et al. (2000) investigate whether the naive investors' hypothesis holds. Following the methodology introduced by Hand (1990) and Walther (1997), they find that for smaller firms, which are more likely to be followed by sophisticated investors, the relation between accruals and negative future returns is weaker compared to larger firms, which are followed by many analysts. Therefore, the researchers suggest that, if anything, the naive investors' hypothesis does not hold. In contrast to other market anomalies where findings suggest that the naive investors' hypothesis holds, the accruals anomaly appears to be unique in this respect. Shi and Zhang (2007) investigate the earnings fixation hypothesis, suggesting that the accruals anomaly is based on investors' "fixation" or "obsession" with earnings.
Their primary hypothesis is that if investors rely heavily on reported earnings and misprice the value-relevant earnings, then returns should depend not only on the accruals but also on how the stock's price changes in response to reported earnings. The researchers' hypothesis is confirmed, and the findings support that an accrual strategy applied to firms whose stock prices fluctuate strongly with earnings yields a 37% annual return. Sawicki and Shrestha (2009) aim to examine two possible explanations for the accruals anomaly. Sloan (1996) proposed the fixation theory, under which investors fixate on earnings and thus overvalue or undervalue information about accruals. Kothari et al. (2006) proposed the "agency theory of overvalued equity", according to which managers of overvalued firms try to prolong the period of overvaluation, which causes accruals to increase. The paper uses insider trading and other firm characteristics to compare and contrast the two major explanations. The evidence produced by Sawicki and Shrestha (2009) supports the Kothari et al. (2006) explanation for the accrual anomaly. In a paper with a relatively different motif, Wu and Zhang (2008) examine the role that discount rates play in the accrual anomaly. They argue that, if anything, the anomaly is not caused by irrationality on the investors' side but by the rationality of firms, as proposed by the q-theory of investment. They argue that as discount rates fall and more projects become profitable (which causes accruals to increase), future stock returns should decline. In other words, if capital investment correctly adjusts to current discount rates, accruals should be negatively correlated with future returns and positively correlated with current returns. The evidence of Wu and Zhang (2008) supports that accruals are negatively correlated with future stock returns, and the contribution of the paper is to document that current stock returns are positively correlated with accruals.

2.3.4. The relation of the accrual anomaly with other market imperfections
Many papers examine the relation between the accruals anomaly and other well-known anomalies such as the post-announcement drift or the value-growth phenomenon. Desai et al. (2002) suggest that the "value-growth" anomaly and the accruals anomaly basically interact, and conclude that the "accruals strategy and the C/P ratio reflect the same underlying phenomena". Collins and Hribar (2000) suggest that there is no link between the accruals anomaly and the PEAD, while Fairfield et al. (2001) support that the accruals anomaly is a sub-category of an anomaly caused by investors' mistaken interpretation of information about growth. Cheng and Thomas (2006) examine the claim that the accrual anomaly is part of a broader anomaly (more specifically, the value-glamour anomaly). Prior literature suggested that the operating cash flows to price ratio subsumes accruals in explaining future stock returns (Desai et al (2004)). Their evidence suggests that the operating cash flow to price ratio subsumes neither abnormal nor total accruals in future announcement returns. This particular result does not confirm the claim that the accrual anomaly is part of a broad value-glamour anomaly. Atwood and Xie (2005) focus on the relation between the accrual anomaly and the mispricing of special items first documented by Burgstahler, Jiambalvo and Shevlin (2002).
Their hypothesis that the two phenomena are highly related is confirmed, since the researchers find that special items and accruals are positively correlated. Additionally, further tests yield results suggesting that the two imperfections are not distinct, and that special items have an impact on how the market misprices accruals. Louis and Sun (2008) aim to assess the relation between the abnormal accrual anomaly and the post-earnings-announcement drift anomaly. The authors hypothesize that both anomalies are related to the management of earnings, and thus they aim to find out whether the two are closely connected. The findings are consistent with the primary hypothesis, as they find that "firms with large positive changes of earnings that were least likely to have manipulated earnings downwards" did not suffer from PEAD, while the same result was obtained for firms that had large negative changes of earnings and were least likely to have managed their earnings upwards. As supported by many researchers, the value-growth anomaly and the accruals anomaly might be closely related, or they might even be caused by similar or even identical roots. Fama and French (1996) support that the book-to-market factor captures the risk of default, while Khan (2008) suggests, in a similar pattern, that firms with low accruals have a larger probability of bankruptcy. Therefore, many researchers try to connect the two phenomena or to answer whether a strategy based on accruals can offer more than what the value-growth strategy offers. Hardouvelis, Papanastopoulos, Thomakos and Wang (2009) connect the two anomalies by assessing the profitability of interacting portfolios based on accrual and value-growth measures. Their findings support that positive returns are obtainable and are magnified when a long position is held in a portfolio with low accruals combined with stocks that are characterised as high market-to-book. The choice between a risk-based explanation and a market imperfection is considered a major debate, as it can challenge the market efficiency hypothesis. Many researchers, such as Fama and French (1996), note that any potentially profitable strategy is simply due to the higher risk that investors have to bear by holding such portfolios. In a similar way, profitable accruals strategies are considered a compensation for higher risk. Stocks that yield larger returns are compared to, or labelled as, stocks of firms that are close to financial distress. Khan (2000) aims to confirm or reject the risk-based explanation of the accruals anomaly. The researcher uses the ICAPM in order to test whether the risk captured by the model can explain the anomaly first documented by Sloan (1996). It is worth noting that the descriptive statistics for the sample used show that firms with low accruals also had high bankruptcy risk. The contribution of the paper is that, by proposing a four-factor model enhanced by recent asset pricing advances, it shows that a great portion of the mispricing that results in the accrual anomaly can be explained within a risk-based framework. Furthermore, Jeffrey Ng (2005) examines the risk-based explanation for the accrual anomaly, which proposes that accruals contain information about financial distress. As proposed by many papers, the accrual anomaly is simply based on the fact that investors bear more risk, and thus low-accrual firms have positive abnormal returns.
The researcher tries to examine how, and whether, the abnormal returns of a portfolio that is short on low-accrual stocks and long on high-accrual firms change when controlling for distress risk. The evidence supports that at least a part of the abnormal returns is a compensation for bearing additional risk. Finally, the results support that a big portion of the high abnormal returns of the accrual strategy used in this particular paper is due to stocks with high distress risk.

2.3.5. The accruals anomaly and its relation with firms' characteristics
A noteworthy part of the academic literature examines the existence of key characteristics or drivers that are highly correlated with the accruals anomaly. Many researchers have published papers that aim to identify the impact of firm characteristics such as firm size, or of characteristics belonging to the broader environment of the firm, such as the accounting standards or the power of minority shareholders. Zhang (2007) investigates whether the accrual anomaly varies cross-sectionally in relation to firm-specific characteristics. The researcher primarily aims to explain what the main reason for the accrual anomaly is. As Zhang (2007) mentions, Sloan (1996) attributes the accrual anomaly to investors' overestimation of the persistence of accruals, while Fairfield (2003) argues that the accrual anomaly is a "special case of a wider anomaly based on growth". The evidence supports the researcher's hypothesis that characteristics such as the covariance of employee growth with accruals have an impact on future stock returns. Finally, Zhang (2007) documents that accruals co-vary with investment in fixed assets and external financing. Louis, Robinson and Sbaraglia (2006) examine whether the non-disclosure of accruals information can have an impact on the accruals anomaly. The researchers, dividing their sample into firms that disclose accruals information at the earnings announcement and firms that do not, investigate whether accruals mispricing exists. The evidence supports that for firms that disclose accruals information, the market manages to correctly understand the discretionary part of the change in earnings. On the contrary, firms that do not disclose accruals information are found to experience "a correction" in their stock price. Chambers and Payne's (2008) primary aim is to examine the relation between the accrual anomaly and audit quality. The researchers' hypothesis is that the accruals mispricing is related to the quality of auditing. Additionally, their findings support that stock prices do not reflect the accruals persistence characterising the lower-quality audit firms. Finally, their empirical work finds that the returns are greater for the portfolio of firms with lower-quality audits. Palmon, Sudit and Yezegel (2008) examine the relation between the accruals mispricing and company size. Evidence shows that company size affects the returns: as the researchers documented, the negative abnormal returns are mostly due to larger firms, while the positive abnormal returns come from the relatively small firms. In particular, the most profitable strategy they found was the one with a short position in the largest-top-accrual decile and a long position in the smallest-low-accrual decile. Bjojraj, Sengupta and Zhang (2009) examine the introduction of the Sarbanes-Oxley Act and FAS 146 and how these two changes affected the accrual anomaly.
FAS 146 (under which liabilities are recognized only when they are incurred) reduced companies' ability to "manipulate" earnings, while SOX aims to enhance the credibility of the financial statements. The evidence recognises a change in how the market perceives information about restructuring charges. The authors propose that a possible explanation is that before the introduction of SOX and FAS 146, the market was reluctant because of firms' ability to manage earnings. Finally, Bjojraj, Sengupta and Zhang (2009) document that after FAS 146 and the SOX act, low-accrual portfolios do not generate positive abnormal returns.

2.4. The applications of the accruals phenomenon and reasons why it is not arbitraged away
The analysis of the anomalies is important for two reasons. Firstly, the profitability of a costless strategy challenges the EMH, especially if the strategy is based on bearing no additional risk. Secondly, managers' incentives to manipulate the financial statements, and consequently the accruals, would be obvious if a profitable strategy based on such widely available information existed. Chen and Cheng (2002) find that managers' incentive to record abnormal accruals is highly correlated with the accrual anomaly. The hypothesis of the researchers, which their findings support, was that investors fail to detect when managers aim to record abnormal accruals, and this may contribute to the accruals anomaly. Richardson's (2000) main objective is to examine whether the information contained in accruals is utilized by short sellers. As the researcher mentions, previous articles such as that of Teoh and Wong (1999) found that sell-side analysts were unable to correctly "exploit" the information contained in accruals for future returns. Richardson suggests that short sellers are sophisticated enough to utilize the accruals information. The findings confirm previous work, such as that of Sloan (2000), suggesting that even short sellers do not correctly utilize the information contained in accruals. Ali, Chen, Yao and Yu (2007) examine whether and how equity funds benefit from the accrual anomaly by taking long positions in low-accrual firms. The researchers aim to identify how exposed equity funds are to such a well-known anomaly and what characteristics these funds share. By constructing a measure called the "accruals investing measure" (AIM), they try to document the proportion of low-accrual firms in actively managed funds. The evidence shows that funds are generally not widely exposed to low-accrual firms, but when they are, they earn an average annual return of 2.83%. It is worth noting that this annual return is net of transaction costs. Finally, the side effects of high volatility in the returns and fund flows of equity funds that are partially based on the accrual anomaly might be the reason behind managers' reluctance. Soares and Stark (2009) used UK firms to test whether a profitable accrual strategy is feasible net of transaction costs. Their findings support that the accrual anomaly is indeed present in the UK market. The authors suggest that for such a strategy to be profitable, one is required to trade in firms with small market capitalization.
They also suggest that although the accruals mispricing seems to exist in the UK as well, transaction costs limit the profits to such an extent that the accrual anomaly could hardly be characterised as a challenge to the semi-strong form of the efficient market hypothesis. Finally, we should not neglect to mention two papers that discuss why the markets do not simply correct the accruals anomaly. According to classical theory, market imperfections create an incentive for the market to correct the "anomalies" at any point in time. Mashruwala, Rajgopal and Shevlin (2006) examined transaction costs and idiosyncratic risk as possible reasons why the accrual anomaly is not arbitraged away. The researchers aimed to investigate why the market does not correct the anomaly, and also to identify whether low-accrual firms are riskier. The paper poses the question of what stops informed investors from taking long positions in stocks that are profitable according to the accrual anomaly, so as to arbitrage it away. The paper examines the practical difficulty of finding substitutes so that risk can be minimized, and its relation with the accrual anomaly. Additionally, the paper investigates transaction costs, and the findings support that the stocks that are profitable according to the accrual anomaly tend to be those with low stock prices and low trading volume. Lev and Nissim (2004) focus on the persistence of the accr

Thursday, September 19, 2019

Impact of Childhood Attachment and Separation Experiences upon Adult Relationships

Impact of Childhood Attachment and Separation Experiences upon Adult Relationships

Abstract
This qualitative research was conducted to ascertain whether the attachment style a person has as an adult is created or influenced by his/her early childhood experiences. The research was carried out by means of a thematic analysis of an interview with a married middle-aged couple. The interviews brought the themes of Work, Childhood and Relationships to the foreground, and these were analysed to establish whether there is a connection between our childhood attachments and those we make as adults. It can be seen that there are similarities between the attachment types of infants and those that emerge in adults, although individual differences and life experiences also have a part to play in our capacity to form secure adult attachment relationships.

Introduction
The general principle behind attachment theory is to describe and explain people's stable patterns of relationships from birth to death. Because attachment is thought to have an evolutionary basis, these social relationships are formed in order to encourage social and cognitive development and to enable the child to grow up to 'become socially confident' in adulthood. The assumption in attachment research on children is that sensitive responses by the parents to the child's needs result in a child who demonstrates secure attachment, while a lack of sensitive responding results in insecure attachment. This research was originally developed by John Bowlby, who attempted to understand the distress infants experience during separation from their parents. Bowlby saw attachment as being crucial to a child's developing personality and to the development of relationships with others later in life. This theory has its foundation in vertical relationships, i.e. Primary Care Giver/Child; on the other hand, in The Nurture Assumption, Judith Rich Harris (1999) suggests that it is peer groups that have the strongest influence in shaping how a child will grow up and that parents have very little influence over the matter; this is known as a horizontal relationship. In developing and classifying infant behaviour, Mary Ainsworth, who worked with Bowlby for a number of years, developed a method of gauging attachment in infants, in an experiment known as the 'Strange Situation'. This involved observations in la... ...ng to see Jo smile and raise her eyebrows when Tony says at the beginning of the first interview that he is "fairly easy going". It led me as a researcher to think that perhaps this was not actually the case, in Jo's opinion. Actions like this give the interview a completely different angle, and can add tremendous information to the final interpretation of what is said.

References
Wood C, Littleton K & Oates J, Lifespan Development, Chapter 1 in Challenging Psychological Issues by Cooper T and Roth I (eds), The Open University, Milton Keynes, 2002.
Ainsworth, M.S., Blehar, M.C., Waters, E. and Wall, S. (1978) Patterns of Attachment: A Psychological Study of the Strange Situation, Hillsdale, NJ, Erlbaum.
Goodley D, Lawthom R, Tindall C, Tobbell J, Wetherell M (eds) (2003) Methods Booklet 4 - Understanding People: Qualitative Methods. Open University Press.
Banister P (ed) (2003) Methods Booklet 5 - Qualitative Project. Open University Press.
Harris, J.R. (1999) The Nurture Assumption, London, Bloomsbury.
Research Methods in Psychology DSE 212 Video 1 - Part 4: Interviewing, Milton Keynes, The Open University.
Appendix
Appendix A - Annotated copy of transcript.

Wednesday, September 18, 2019

Ritualistic Sacrifice in Ancient Greek Mythology

The ritual of sacrifice in Greek literature played a prominent role in societal influence, defining many aspects of the culture. Sacrifice was the foundation of moral concern, as well as an effective means of narrative development in Greek tragedy. The thematic recurrence of sacrifice in Greek literature reveals its symbolic importance. At a time when politics and religion were one and the same, sacrifice was crucial in regulating governmental issues. Tragedies manipulate rituals in order to portray a community's current sense of order or disorder. The pattern of sacrifice typically entails conflict between the needs of an individual and those of a community in crisis, ultimately resolved in favor of the community through the willing participation of the sacrificial victim (Easterling 188). Rites of sacrifice serve to rectify corrupted relations and maintain moral balance. The social order of Greek life is constructed, by sacrifice, through irrevocable acts; religion and political existence were thoroughly integrated, forcing all other life functions to reflect this foundation. In Greek literature, the role of sacrifice served many functions. The literal meaning of sacrifice, in most instances, stands in contrast to the consequences of its perpetration, ultimately establishing beneficial results. Most importantly, sacrifice was the basis of the relations maintained between men and gods, establishing a means of contact and interaction. Additionally, the practice of ritual sacrifice helped to classify the gods and differentiate them from one another: double aspects of a single deity, hierarchical relations between two deities, or the outstanding nature of one particular deity. And finally, sacrifice functions directly to clarify the political rights of each individual and reveal the structures of their social body (Sissa and Marcel). However, various implementations of sacrifice can produce different results depending on the direction of the interaction. For example, sacrifice can take place between a god and animals, humans, or another god, thus revealing rites both of, and to, mythological gods. Mortals made sacrifices at any time, to any god, on the occurrence of something that fell within that deity's jurisdiction, or as payment of a vow (Sissa and Marcel). Rites of sacrifice were also the focus of many cultural festivals in which additional purposes were combined, such as rites of initiation, purification, fire, blood and oath. These rites presented themselves in all facets of Greek culture, producing ritualistic transfers of virtue, possessions, and power, seeking to redress past injustices or to return existence to the status quo.