Thursday, October 31, 2019

Cardiac Care Essay Example

The approach has two goals, and these are: 1) to restart the heart as quickly as possible and to start cooling as early as possible, and 2) to transport patients to a single specialized post-resuscitation facility in hopes of preserving their brains. Typically, in EMS systems, cooling begins after return of spontaneous circulation (ROSC). However, it was recently found that patients can be cooled while resuscitation attempts are still under way, and can receive a broad complement of additional therapies and support both before and during hospitalization, as studied by the Virginia Commonwealth University (VCU) Medical Center. Joseph Ornato, MD, chairman of VCU's Department of Emergency Medicine and medical director of the Richmond Ambulance Authority, explained the motivation for this approach: "Our approach was to do something a bit different." He explained the need for early cooling as part of EMS care on the basis of the study he conducted: "The basis is animal data that pretty consistently suggests that the earlier you initiate cooling, particularly during the resuscitation process, the more likely you are to get a good neurologic outcome." Ornato pointed out that even though early studies attempting to prove that cooling after return of spontaneous circulation improved the chance of survival had failed, he is betting that the EMS approach of cooling early will prove beneficial for patients. At the time the article was written, the application of early cooling after ROSC was still at an early stage. 1) Good-quality CPR that includes automated chest compressions and interposed ventilations; this is performed for 2 to 3 minutes before the rhythm is determined and should not be stopped during defibrillation. 3) Cooling is achieved with 4°C saline once drugs have restarted the heart; this is a treatment in which vasopressin and epinephrine are used alternately, and drugs are given IO whenever IV access cannot be achieved on the initial pass. During the

Tuesday, October 29, 2019

Research Paper Summary: Carrying Capacity

The methodology employed is two-pronged, as mentioned. The laboratory study was carried out first. It examined various attributes of Maculinea arion, such as its feeding preferences, its autumn food consumption and growth, and its weight loss over the winter period. The sample comprised a total of 69 caterpillars reared from the onset of their final instars in late summer through to the month of May. Each caterpillar was kept in a seven-cubic-centimetre plastic box furnished with a small piece of moist sponge. During that period, fresh food was supplied to the specimens every 2-3 days to ensure a surplus of food. It is also important to note that the caterpillars were divided into sets, with each set provided a different food type and composition, to enable accurate determination of the study's objectives. Each caterpillar was also weighed weekly and kept in a clean box, with the resulting weight recorded for each caterpillar. In addition to the variables obtained in the laboratory set-up, published materials were used to estimate the number and biomass of the immature stages of M. sabuleti available to final-instar caterpillars of Maculinea arion. Finally, estimates were made from the mean body weight obtained 2 days before eclosion. The field study examined the mortality rates of wild Maculinea arion introduced at different densities into Myrmica sabuleti nests. This was measured during a five-year study on site X. The study involved introducing the caterpillars at different stages of the period, with excavations and counts being carried out. The data obtained were subsequently recorded to monitor the changes and effects of the different seasons, eating habits and species behaviour. As a combination of the two methods and integration

Sunday, October 27, 2019

History of Cocaine Use: Medical and Recreational Uses

Cocaine through the ages: from elixir to poison.

Abstract: Cocaine, a plant alkaloid derived from coca leaves, is a potent stimulant of the CNS and has local anesthetic action as well. Historically, it was ingested by chewing coca leaves to suppress hunger and fatigue. With the discovery of its local anesthetic properties, cocaine was introduced into the world of medicine as a local anesthetic, but over the last few decades it has gained popularity as a drug of abuse. Cocaine carries with it great potential for addiction and abuse. It is administered through various routes, smoking free-base crack and intranasal inhalation being the most popular. It is primarily metabolized in the liver and distributed to all body tissues. Due to its lipid solubility, it tends to concentrate in brain and adipose tissues with chronic administration. It is mainly eliminated through the kidneys, but saliva and stools are also routes of excretion. A number of health hazards have been shown to be associated with cocaine use, including cardiac abnormalities, psychological disturbances, addiction potential and renal failure with or without rhabdomyolysis. Acute and chronic cocaine toxicities, for which sufficient data have been collected, are included. Techniques for detecting cocaine in blood, including enzyme-linked immunoassay and POCT (point-of-care screening tests), are also discussed. Recent trends in cocaine use have been analysed and presented, along with graphical illustrations of epidemiological evidence to support the data.

Introduction and objective: Objective: to display how cocaine has evolved through time in its uses and available forms, from the simple coca-leaf-chewing custom of South Americans in 2500 BC to modern forms such as freebase coke, becoming one of the most commonly abused toxic drugs.

Methodology: Data was mainly collected from electronic resources, but texts on immunology and pharmacology were also consulted. From electronic sources, I mainly used search engines with a number of keywords, including: history of cocaine, crack, pharmacokinetics of cocaine, mode of action, coca leaf, acute cocaine poisoning, chronic cocaine toxicity, Karl Koller, Sigmund Freud, immunoassay, etc. I also went through a number of journals available online, and a number of studies related to cocaine. My aim was to trace changes in cocaine use from its discovery to date and to show, with the help of the collected data, that it has moved in a negative direction.

Brief history: Cocaine, the use of which, according to some sources, dates back at least 1,200 years, has now, rightfully, earned itself a place on the list of drugs of abuse alongside others like caffeine, nicotine and amphetamine. To date, cocaine's uses have evolved from gaining popularity as a topical anesthetic agent and as a component of energizing drinks to becoming one of the most abused drugs in the world. It is a powerfully addictive stimulant drug, which acts by interfering with cerebral and peripheral synaptic transmission among neurons. Its mode of action is described in greater detail later, in the pharmacodynamics section; briefly, it interferes with the reuptake, and thereby enhances the duration of action, of the monoamines dopamine, serotonin and norepinephrine (Brain PF et al., 1989). It also produces a membrane-stabilizing effect, more commonly referred to as a local anesthetic effect.
The latter is achieved through modulation of voltage-gated sodium channels and the consequent blockade of sensory impulse conduction from that part of the neuron to the central nervous system (Brain PF et al., 1989). The earliest records of cocaine use reveal it to be part of a South American custom of chewing coca leaves, a practice believed to date back to 2500 BC (Steven Cohen, 1981). The practice of chewing a mixture of tobacco and coca leaves was described by Nicolás Monardes, in 1569, as inducing "great contentment". Cocaine is the active component of the coca leaves, while the tobacco in the mixture contributed nicotine (Karch SB, 1998). In 1859, the Italian doctor Paolo Mantegazza, after witnessing coca use by the natives of Peru and being mesmerized by it, decided to study the effects of cocaine on himself. He concluded his findings in a paper in which he declared cocaine to be medically useful in treating "a furred tongue in the morning", flatulence and whitening of the teeth (Steven R. King, 1992). In 1863, the French chemist Angelo Mariani introduced the popular coca wine Vin Mariani, produced from a mixture of coca leaves and Bordeaux wine at 6 mg per fluid ounce (Courtwright DT, 2001). Angelo Mariani, whose creation later became the hallmark of coca wines, was honored with a Vatican gold medal by Pope Leo XIII for this achievement. Ethanol, a component of Vin Mariani, is believed to extract the cocaine from the coca leaves. In 1884, the concept of coca wine was adopted by John S. Pemberton with the introduction of Pemberton's French Wine Coca. After prohibition legislation passed in 1885 barred products like coca wine, Pemberton introduced a carbonated, non-alcoholic form of his formula and called it Coca-Cola (Richard Ashley, 1975). From 1906 onwards, however, after the Pure Food and Drug Act was passed, decocainised forms of coca were used for the manufacture of Coca-Cola. In 1884, the Austrian physician Sigmund Freud recommended cocaine for the treatment of morphine and alcohol addiction, a strategy that had already been tried in 1879, when cocaine was used to treat morphine addiction (Steven Cohen, 1981). In his published work Über Coca, Freud described the effects of cocaine in the following words: "exhilaration and lasting euphoria, which in no way differs from the normal euphoria of the healthy person... You perceive an increase of self-control and possess more vitality and capacity for work... In other words, you are simply normal, and it is soon hard to believe you are under the influence of any drug... Long intensive physical work is performed without any fatigue... This result is enjoyed without any of the unpleasant after-effects that follow exhilaration brought about by alcohol... Absolutely no craving for the further use of cocaine appears after the first, or even after repeated taking of the drug." In 1885, the use of cocaine for the induction of spinal anesthesia was accidentally discovered by the American neurologist Leonard Corning, while he was studying the effects of cocaine on the spinal nerves of a dog and accidentally pierced the dura mater (Corning JL, 1885). Cocaine was, however, not used as an anesthetic in spinal surgery until 1898, when the first planned cocaine-induced spinal anesthesia was administered in a surgery by August Bier (A. Bier, 1899). Coca leaves have traditionally been used as suppressants of fatigue, thirst and hunger. Their use is now largely limited to the Andean countries, where coca leaf chewing and coca tea consumption are still practiced.
Industrially, coca leaves serve as the source of the drug cocaine, and they are used in some cosmetic and food products, including Coca-Cola (Richard Ashley, 1975). From the 1980s to date, cocaine has gained popularity as a drug of abuse and has widely replaced heroin and other narcotics, being used in different forms and administered via various routes (Richard Ashley, 1975).

Discovery: The discovery of cocaine as a local anesthetic is the claim to fame of the Austrian ophthalmologist Karl Koller. Koller is credited with demonstrating the anesthetic effect of cocaine in 1884. Karl Koller was a close associate of Sigmund Freud, who in the same year recommended that cocaine be employed in the treatment of morphine and alcohol addiction (Hruby K, 1986). Koller studied the effects of cocaine on the eye by applying the drug to his own eye and then pricking it with pins. He presented his findings to the Heidelberg Ophthalmological Society that same year (Hruby K, 1986). After successfully experimenting on himself, Koller used cocaine as a local anesthetic in eye surgery, a use that continues to this day. Cocaine was later employed in other fields, including dentistry, for the induction of local anesthesia. Today, however, cocaine has largely been replaced by other local anesthetic agents, such as lidocaine (Xylocaine) and bupivacaine, which produce the local anesthetic effect as efficiently and do not carry the potential for abuse (Hruby K, 1986).

Isolation: Friedrich Gaedcke, a German chemist, was the first person to successfully isolate cocaine from coca leaves, in 1855. An improved isolation process was, however, developed by Albert Niemann, then a Ph.D. student at the University of Göttingen in Germany, in 1859. Niemann wrote a dissertation describing the steps of the isolation, published in 1860 and entitled "Über eine neue organische Base in den Cocablättern" (On a New Organic Base in the Coca Leaves) (F. Gaedcke, 1855).

Formal chemical name (IUPAC) for cocaine: (1R,5S)-methyl 8-methyl-3-(phenylcarbonyloxy)-8-azabicyclo[3.2.1]octane-2-carboxylate.

Chemical structure of cocaine: The structure of the cocaine molecule was first defined by Richard Willstätter in 1898.

Medicalisation and popularization: Ever since its discovery, cocaine's medical uses were quickly exploited through research and experimentation. Spanish physicians described the first medical uses of cocaine as early as 1596, but the use of cocaine did not become more widespread until 1859, when Albert Niemann isolated the drug from coca leaves. Soon after it was isolated, cocaine was used to try to cure almost all the illnesses and maladies known to man (Albert Niemann, 1860). Mantegazza's 1859 observations about cocaine being useful for treating a furred tongue in the morning, flatulence and whitening of the teeth were among the earliest recorded studies signifying the possible medical importance of cocaine. In 1879, Vassili von Anrep, of the University of Würzburg, demonstrated the analgesic properties of cocaine in an experiment he conducted on a frog. He prepared two separate jars, one containing a cocaine-salt solution and the other containing salt water as a control. One of the frog's legs was submerged in the cocaine solution and the other in the control, followed by stimulation of the legs in different ways. The reactions of the two legs varied considerably. In the same year, cocaine began to be used in the treatment of morphine addiction. The commercial production of purified cocaine gained momentum only in the mid-1880s. Its greatest medical value was in ophthalmology.
Eye surgery stood in desperate need of a good local anesthetic, because in eye operations it is often essential for a conscious patient to move his eye as directed by the surgeon without flinching. Karl Koller's demonstration of the anesthetic properties of cocaine in 1884 was an important breakthrough establishing cocaine's medical importance; it was soon introduced in Germany as a local anesthetic for eye surgery (Altman AJ et al., 1985). Koller's discovery was followed in 1885 by Leonard Corning's accidental demonstration of cocaine's use in the induction of spinal anesthesia, which became formally employed in spinal surgery in 1898, when the first planned cocaine-induced spinal anesthesia was administered by August Bier. The medical use of cocaine has largely been restricted to the induction of local anesthesia. Even as a local anesthetic agent, the discovery of the hazardous effects of cocaine use led to the early development of safer alternative drugs like lidocaine. One of the first non-medical uses of cocaine was in the military. In 1883, Theodor Aschenbrandt administered cocaine to members of the Bavarian army, and it was found that the drug enhanced their endurance on maneuvers. His positive findings were published in a German medical journal, which brought the effects of this wonder drug to a wider medical audience, including Sigmund Freud. The following is taken from "On Cocaine" by Sigmund Freud:

"A few minutes after taking cocaine, one experiences a certain exhilaration and feeling of lightness. One feels a certain furriness on the lips and palate, followed by a feeling of warmth in the same areas; if one now drinks cold water, it feels warm on the lips and cold in the throat. On other occasions the predominant feeling is a rather pleasant coolness in the mouth and throat. During this first trial I experienced a short period of toxic effects, which did not recur in subsequent experiments. Breathing became slower and deeper and I felt tired and sleepy; I yawned frequently and felt somewhat dull. After a few minutes the actual cocaine euphoria began, introduced by repeated cooling eructation. Immediately after taking the cocaine I noticed a slight slackening of the pulse and later a moderate increase. I have observed the same physical signs of the effect of cocaine in others, mostly people my own age. The most constant symptom proved to be the repeated cooling eructation. This is often accompanied by a rumbling which must originate from high up in the intestine; two of the people I observed, who said they were able to recognize movements in their stomachs, declared emphatically that they had repeatedly detected such movements. Often, at the outset of the cocaine effect, the subjects alleged that they experienced an intense feeling of heat in the head. I noticed this in myself as well in the course of some later experiments, but on other occasions it was absent. In only two cases did coca give rise to dizziness. On the whole the toxic effects of coca are of short duration, and much less intense than those produced by effective doses of quinine or salicylate of soda; they seem to become even weaker after repeated use of cocaine."

Cocaine was sold as an over-the-counter drug until 1916. It was widely used in tonics, toothache cures, patent medicines, and chocolate cocaine tablets. Prospective buyers were advised (in the words of the pharmaceutical firm Parke-Davis) that cocaine could make the coward brave, the silent eloquent, and render the sufferer insensitive to pain.
Cocaine was a popular ingredient in wines, notably Vin Mariani. Coca wine received endorsements from prime ministers, royalty and even the Pope; the Vatican gold medal that Angelo Mariani received for it will forever signify the popularity of cocaine through that period. By the late Victorian era, the use of cocaine had appeared as a vice in literature, for instance in Arthur Conan Doyle's fictional Sherlock Holmes. The number of admissions to drug treatment programmes in each year, plotted against time for both cocaine and heroin, clearly displays the shift in trend from heroin use towards cocaine. A combination gaining popularity is the speedball, which is formulated by mixing heroin with cocaine. From the 1980s to date, cocaine has gained popularity as a drug of abuse, being used in different forms and administered via various routes, as evident in the escalation of crack/cocaine usage with a concomitant reduction in heroin use.

Prohibition: In the first part of the twentieth century, as the addictive properties of cocaine became more apparent through studies, cocaine found itself legally prohibited. The Harrison Narcotics Tax Act (1914) outlawed unauthorized sales and distribution of cocaine, incorrectly classifying it as a narcotic. In the United Nations' 1961 Single Convention on Narcotic Drugs, cocaine was listed as a Schedule I drug, thereby making its manufacture, distribution, import, export, trade, use and possession illegal unless sanctioned by the state. In the 1970 Controlled Substances Act, cocaine was listed as a Schedule II drug in the United States: it carries high abuse potential but also serves a medicinal purpose. It is a Class A drug in the United Kingdom, and a List 1 drug under the Opium Law in the Netherlands.

Modern Usage: In the late '90s and early 2000s, crack became very popular among Americans, and in the past few years it has also taken its toll on the UK. According to one estimate, the U.S. cocaine market exceeded $70 billion in 2005, demonstrating the popularity of this menace. News reports are flooded with celebrity arrests on charges of cocaine possession or use. A section on recent facts and figures related to cocaine discusses the modern trends in greater detail later.

Addiction potential: Along with amphetamine, cocaine is one of the most widely abused drugs in the world. The powerful stimulant properties of cocaine are beyond doubt. By inhibiting the neuronal reuptake of the excitatory neurotransmitters dopamine, serotonin and norepinephrine, cocaine enhances the synaptic concentrations of these neurotransmitters in specific brain areas, the nucleus accumbens and the amygdala, which are referred to as the reward centers of the brain. During the 1980s, cocaine widely replaced heroin as a drug of abuse, owing to its euphoric properties, wide availability and low cost.

Different forms and routes of administration of cocaine: Smoking: Crack, the freebase or smokable form of cocaine, was produced and became a popular drug of abuse in the 1980s. The earliest reports of crack use indicate an epidemic in the Bahamas from 1980. By 1985, crack had gained popularity among drug users across America. Crack is produced by mixing two parts cocaine hydrochloride with one part baking soda (sodium bicarbonate). It differs from cocaine hydrochloride in being more volatile, a property that makes it better suited to administration by inhalation (smoking) than cocaine hydrochloride.
Smoking freebase cocaine releases methylecgonidine, an effect not achieved with insufflation or injection (described later), making it a specific test marker for freebase cocaine smokers. Studies suggest that methylecgonidine is more harmful to the heart, liver and lungs than other byproducts of cocaine. Inhalation leads to rapid absorption of cocaine into the bloodstream via the lungs, reaching the brain within five seconds of ingestion. The resulting rush exceeds snorting in intensity but does not last as long.

Oral: The ancient South American tradition of chewing coca leaves, in the same manner as tobacco, is another method of cocaine consumption. Alternatively, coca leaves may be consumed like tea by mixing them with liquid. Coca leaf consumers have raised a controversy over whether the practice should be criminalized. The rationale is that the strong acid in the stomach hydrolyzes cocaine, attenuating its effects on the brain; unless it is taken with an alkaline substance, such as lime, which neutralizes the stomach's acid, the effect is weak, and therefore, the argument runs, coca leaf intake should not be criminalized. Cocaine is also used as an oral anesthetic, both medically and unofficially: cocaine powder is applied to the gums to numb the gingiva and teeth. Colloquial terms for this route of administration are numbies, gummies and cocoa puffs. Another method of oral administration, commonly known as a snow bomb, is to pack cocaine in rolled-up paper and swallow it.

Insufflation: Colloquially known as snorting, sniffing, or blowing, this is believed to be the most commonly employed method of cocaine ingestion in the West. Cocaine is poured onto a flat, hard surface and divided into fine powder before being insufflated in "bumps", "lines" or "rails". Devices used as aids to insufflation are known as "tooters"; anything small and hollow, such as a straw cut short, can serve as a tooter.

Injection: This achieves the greatest bioavailability (100%) in the shortest span of time, since the drug is administered directly into the bloodstream, avoiding the delay and reduced bioavailability involved when a drug must be absorbed from the site of administration. The resulting rush is intense and rapid. The risk of contracting blood-borne infections is greatest with this route. The "speedball", a mixture of cocaine and heroin used intravenously, is a popular and dangerous method of cocaine ingestion. It claims credit for many deaths, including celebrities like John Belushi, Chris Farley, Mitch Hedberg, River Phoenix and Layne Staley.

ADME Pharmacokinetics: Absorption, Distribution, Metabolism and Excretion of Cocaine. Before beginning the discussion of the pharmacokinetics (ADME) of cocaine, the table below summarizes the relationship of the route of administration to onset of action, time taken to achieve peak effect, duration of action and half-life (Clarke, 1986).

Route of administration   Onset    Peak effect (min.)   Duration (min.)   Half-life (min.)
Inhalation                7 s      1-5                  20                40-60
Injection                 15 s     3-5                  20-30             40-60
Nasal                     3 min    15                   45-90             60-90
Oral                      10 min   60                   60                60-90

Absorption: Absorption refers to the movement of a drug from the site of administration into the bloodstream. As with any drug, the absorption of cocaine depends on various factors and varies considerably with them. Factors which influence drug absorption include drug formulation, route of administration, lipid solubility, pH of the medium, blood supply and the surface area available for absorption. As evident from the tabulated figures above, cocaine differs greatly in onset of action, varying from 7 seconds to 10 minutes from one route of administration to another.
This reflects differences in drug absorption, which depends on the route of administration. Each route is discussed separately below in greater detail (Clarke, 1986).

Orally administered cocaine: Cocaine induces vasoconstriction in the vessels supplying the oral mucosa, and the resultant reduction in blood supply slows its absorption by decreasing the surface area from which the drug is absorbed. Therefore, when administered orally, the drug is slowly absorbed into the bloodstream, taking roughly 30 minutes. Absorption is also incomplete: roughly one third of the administered dose is absorbed. Because of the slow absorption, the onset of action is delayed, and the peak effect is not achieved until about 50-90 minutes after administration. The effect is, however, longer lasting, persisting roughly 60 minutes after the peak effect is attained. Another factor affecting the absorption of orally administered cocaine is the pH of the stomach. As previously mentioned, stomach acid hydrolyzes cocaine, resulting in inadequate and incomplete absorption; to improve absorption, it is common practice to take cocaine along with an alkaline liquid to neutralize the acidic pH.

Insufflation: Insufflation results in coating the mucosa covering the sinuses with cocaine, from where it is absorbed. Absorption is similar to that from the oral cavity: cocaine-induced vasoconstriction beneath the mucosa results in slow and incomplete absorption (30-60%). The efficiency of absorption increases with the concentration of the drug. According to one study, the time taken to reach peak effect via this route of administration averages 14.6 minutes.

Injection: Injected cocaine is administered directly into the bloodstream, eliminating the need for absorption. According to the same study mentioned for insufflation, the time taken to reach the peak effect of cocaine through injection averaged 3.1 minutes, roughly one-fifth of the time for insufflation.

Smoking: Smoking crack delivers large quantities of the drug to the lungs; the resulting absorption is rapid, and the effects are comparable to those of intravenous administration. These effects, felt almost immediately after smoking, are intense and last for 5-10 minutes. According to Perez-Reyes et al. (1982), volunteers who smoked 50 mg of cocaine base in a controlled study achieved plasma cocaine levels that rose as rapidly as with intravenous cocaine administration.

Distribution: Following absorption into the bloodstream, cocaine is distributed via the blood to all body tissues, including vital organs like the brain, lungs, liver, heart, kidneys and adrenals. It crosses both the blood-brain and the placental barrier. Being lipid soluble, it easily traverses biological membranes via simple diffusion, and it is believed to accumulate in brain and adipose tissue with repeated administration, owing to its lipid nature. In one experiment, the distribution and kinetics of cocaine in the human body were studied using the Positron Emission Tomography (PET) technique with radioactively labeled (carbon-11) cocaine in 14 healthy male subjects. Rates of uptake and clearance were found to vary among organs. The following times were obtained for radioactively labeled cocaine to reach its peak value: lungs, 45 seconds; heart and kidneys, 2-3 minutes; adrenals, 7-9 minutes; liver, 10 minutes. The liver, which is the key site for the metabolism of cocaine, is where distribution is most sluggish, increasing the half-life of cocaine (The Journal of Nuclear Medicine, 1992).

Metabolism: As already mentioned, cocaine is primarily metabolized in the liver.
It is estimated to be metabolized within two hours of administration. The half-life varies between 0.7 and 1.5 hours (Clarke, 1986), depending on the route of administration among various other factors. There are three possible routes for the biotransformation of cocaine. The ester linkages in cocaine are hydrolyzed by plasma pseudocholinesterases and the hepatic enzymes human liver carboxylesterase form 1 (hCE-1) and human liver carboxylesterase form 2 (hCE-2); the benzoyl group is eliminated to produce ecgonine methyl ester. This is the major route for the metabolism of cocaine. A secondary route, suggested by Fleming et al. (1990), proposes spontaneous, possibly non-enzymatic, hydrolysis, followed by demethylation to produce benzoylecgonine. N-demethylation of cocaine is a minor route which leads to the formation of norcocaine. Final degradation of the metabolites yields ecgonine. The principal inactive metabolites are benzoylecgonine, ecgonine methyl ester, and ecgonine itself. Norcocaine is an active metabolite and may reveal itself in acute intoxication. The metabolism of cocaine may be influenced by a number of factors: alcohol, pregnancy, liver disease, old age and congenital cholinesterase deficiency. When cocaine is co-administered with alcohol, a compound called cocaethylene is formed, which is associated with an increased risk of liver damage and premature death. In all the aforementioned conditions except alcohol, the rate of cocaine metabolism is reduced, leading to elevated levels and a prolonged duration of action of cocaine, enhancing its harmful effects on the body. Andrew (1997) found that the continuous use of alcohol with cocaine produces cocaethylene, which is similar in action to cocaine but reaches bloodstream concentrations three to five times higher than cocaine as a result of its longer half-life. It is more attractive for abuse because of its slower removal from the body. Various side effects are associated with cocaethylene, including liver damage, seizures and compromised immune function. Cocaethylene carries an 18-25 times greater risk of sudden death than cocaine used alone. Butyrylcholinesterase (BChE) has been implicated as important in the metabolism of cocaine, even though it has a limited capacity to fully hydrolyze cocaine; BChE is especially essential for cocaine detoxification. A great deal of research has been done on employing this enzyme in cocaine detoxification and in anti-cocaine medications. The rate at which human BChE hydrolyzes cocaine is slow; however, scientists at the Eppley Institute and the Department of Biochemistry and Molecular Biology, University of Nebraska Medical Center, Omaha, Nebraska, have developed a mutant (A328Y) of human butyrylcholinesterase which promises fourfold greater efficiency in accelerating cocaine metabolism.

Elimination or excretion: 1-9% of cocaine is excreted unaltered in the urine, along with the metabolites ecgonine methyl ester, benzoylecgonine and ecgonine. Unchanged cocaine may also be eliminated through the GI tract and/or excreted in saliva. Most of the parent drug is eliminated from plasma within 4 hours of administration, but metabolites may remain detectable for up to 144 hours after administration. Elimination of cocaine via the kidneys is enhanced by acidification of the urine. As already mentioned, cocaine easily traverses the placental barrier, and the active metabolite norcocaine is believed to persist in amniotic fluid for up to 5 days.
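As a rough arithmetic check, the quoted half-life can be related to the 4-hour plasma elimination window mentioned above, assuming simple first-order (exponential) elimination; the kinetic model is an illustrative assumption of this summary, not something stated by the sources cited here:

$$C(t) = C_0 \left(\frac{1}{2}\right)^{t/t_{1/2}}, \qquad t_{1/2} = \frac{\ln 2}{k}$$

% Illustrative only: first-order kinetics assumed; t_{1/2} = 1 h taken from the quoted 0.7-1.5 h range.
Taking $t_{1/2} = 1$ h, $C(4\,\mathrm{h}) = C_0 (1/2)^4 \approx 0.06\,C_0$: roughly 94% of the parent drug would be cleared from plasma within 4 hours, consistent with the elimination window quoted above.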
In lactating mothers, cocaine and benzoylecgonine are excreted into the maternal milk and can be detected for up to 36 hours after administration. In smokers, cocaine is rapidly eliminated through the exhalation of vapor (Ambre J et al., 1988). In one experiment, the effects of chronic oral cocaine administration were investigated in healthy volunteer subjects with a history of cocaine abuse. There were sixteen daily sessions of oral cocaine administration while the subjects were kept on a controlled clinical ward. In every session, subjects received five equal doses of oral cocaine at one-hour intervals. Throughout the sessions, cocain

Friday, October 25, 2019

What Is Depression?

What Is Depression? "I feel like I can't do it anymore. There's just too much pressure. I just want people to leave me alone; let me sleep for a while. I hate school. I hate sports. I hate everyone. It's not worth it...life's not worth it. A couple pills and it could all end..." This is how I used to feel. Since 9 th grade I've had depression. As with so many other kids it went unnoticed. I did everything to cover up how I was feeling inside. I thought, "Maybe if I just forget about it, it will go away." I was wrong. I blew up. I cried and thought of killing myself on the spot. I couldn't take it anymore. There was so much anger, stress, and sadness, and I didn't know how to deal with it. I broke down crying in my coach's office one day because I got kicked off the baseball team and he had no idea what was going on. This was the straw that broke the camel's back. At first, all I could do was cry. After about 10 minutes of tears, I finally told my coach how I was feeling. I told him that every day I had to think of a reason to live. I told him I didn't want to deal with it anymore. He was the first person I shared my thoughts with. I still remember how he responded. The first thing out of his mouth was, "I had no idea. I th ought you lived the greatest life of anyone in this school." Then he just listened while I told my real life story; the life that I covered up. When I was finished we both just sat there. After a long silence he finally said, "You need to get some help..." Depression is a long-term or short-term feeling of sadness and usually has physical symptoms associated with it. This is the clinical definition given by the National Institute of Mental Health in their article, "Depression." Everyone feels sad at some point and it can last for a while.

Thursday, October 24, 2019

Digital Cinema

Scott McQuire

Scott McQuire, 'Impact Aesthetics: Back to the Future in Digital Cinema?', Convergence: The Journal of Research into New Media Technologies, vol. 6, no. 2, 2000, pp. 41-61. © Scott McQuire. All rights reserved. Deposited to the University of Melbourne ePrints Repository with permission of Sage Publications.

Millennial fantasies

As anyone interested in film culture knows, the last decade has witnessed an explosion of pronouncements concerning the future of cinema. Many are fuelled by naked technological determinism, resulting in apocalyptic scenarios in which cinema either undergoes digital rebirth to emerge more powerful than ever in the new millennium, or is marginalised by a range of 'new media' which inevitably include some kind of broadband digital pipe capable of delivering full screen 'cinema quality' pictures on demand to home consumers. The fact that the double-edged possibility of digital renaissance or death by bytes has coincided with celebrations of the 'centenary of cinema' has undoubtedly accentuated the desire to reflect more broadly on the history of cinema as a social and cultural institution. It has also intersected with a significant transformation of film history, in which the centrality of 'narrative' as the primary category for uniting accounts of the technological, the economic and the aesthetic in film theory has become subject to new questions.

Writing in 1986, Thomas Elsaesser joined the revisionist project concerning 'early cinema' to cinema's potential demise: 'A new interest in its beginnings is justified by the very fact that we might be witnessing the end: movies on the big screen could soon be the exception rather than the rule'.[1] Of course, Elsaesser's speculation, which was largely driven by the deregulation of television broadcasting in Europe in conjunction with the emergence of new technologies such as video, cable and satellite in the 1980s, has been contradicted by the decade-long cinema boom in the multiplexed 1990s. It has also been challenged from another direction, as the giant screen 'experience' of large format cinema has been rather unexpectedly transformed from a bit player into a prospective force. However, in the same article, Elsaesser raised another issue which has continued to resonate in subsequent debates:

Few histories fully address the question of why narrative became the driving force of cinema and whether this may itself be subject to change. Today the success of SF as a genre, or of directors like Steven Spielberg whose narratives are simply anthology pieces from basic movie plots, suggests that narrative has to some extent been an excuse for the pyrotechnics of ILM.[3]

Concern for the demise, if not of cinema per se, then of narrative in cinema, is widespread in the present. In the recent special 'digital technology' issue of Screen, Sean Cubitt noted a 'common intuition among reviewers, critics and scholars that something has changed in the nature of cinema – something to do with the decay of familiar narrative and performance values in favour of the qualities of the blockbuster'.[4] Lev Manovich has aligned the predominance of 'blockbusters' with 'digital cinema' by defining the latter almost entirely in terms of increased visual special effects: 'A visible sign of this shift is the new role which computer generated special effects have come to play in the Hollywood industry in the last few years. Many recent blockbusters have been driven by special effects; feeding on their popularity'.[5]
In his analysis of Hollywood's often anxious depiction of cyberspace in films such as The Lawnmower Man (1992), Paul Young argues that 'cyberphobic films overstress the power of the visual in their reliance on digital technology to produce spectacle at the expense of narrative', and adds that this is 'a consequence that [Scott] Bukatman has argued is latent in all special effects'. A more extreme (but nevertheless common) view is expressed by film maker Jean Douchet: '[Today] cinema has given up the purpose and the thinking behind individual shots [and narrative], in favour of images – rootless, textureless images – designed to violently impress by constantly inflating their spectacular qualities'.[7] 'Spectacle', it seems, is winning the war against 'narrative' all along the line.

Even a brief statistical analysis reveals that 'special effects' driven films have enjoyed enormous recent success, garnering an average of over 60% of the global revenue taken by the top 10 films from 1995-1998, compared to an average of 30% over the previous four years.[8] Given that the proportion of box office revenue taken by the top 10 films has held steady or increased slightly in the context of a rapidly expanding total market, this indicates that a handful of special-effects films are generating huge revenues each year. While such figures don't offer a total picture of the film industry, let alone reveal which films will exert lasting cultural influence, they do offer a snapshot of contemporary cultural taste refracted through studio marketing budgets. Coupled to the recent popularity of paracinematic forms, such as large format and special venue films, the renewed emphasis on 'spectacle' over 'narrative' suggests another possible end-game for cinema: not the frequently prophesied emptying of theatres made redundant by the explosion of home-based viewing (television, video, the internet), but a transformation from within which produces a cinema no longer resembling its (narrative) self, but something quite other.

Complementing these debates over possible cinematic futures is the fact that any turn to spectacular film 'rides' can also be conceived as a return – whether renaissance or regression is less clear – to an earlier paradigm of film-making famously dubbed the 'cinema of attraction' by Tom Gunning. Gunning long ago signalled this sense of return when he commented: 'Clearly in some sense recent spectacle cinema has re-affirmed its roots in stimulus and carnival rides, in what might be called the Spielberg-Lucas-Coppola cinema of effects'.[9] For Paul Arthur, developments in the 1990s underline the point:

The advent of Imax 3-D and its future prospects, in tandem with the broader strains of a New Sensationalism, provide an occasion to draw some connections with the early history of cinema and the recurrent dialectic between the primacy of the visual and, for lack of a better term, the sensory.[10]

In what follows here, I want to further consider the loops and twists of these debates, not so much with the grand ambition of resolving them, but firstly of adding some different voices to the discussion – particularly the voices of those involved in film production.[11]
My intention is not to elevate empiricism over theory, but to promote dialogue between different domains of film culture which meet all too rarely, and, in the process, to question the rather narrow terms in which 'digital cinema' has frequently entered recent theoretical debates. Secondly, I want to consider the relation between 'narrative' and 'spectacle' as it is manifested in these debates. My concern is that there seems to be a danger of confusing a number of different trajectories – such as cinema's on-going efforts to demarcate its 'experience' from that of domestic entertainment technologies, and the turn to blockbuster exploitation strategies – and conflating them under the heading of 'digital cinema'. While digital technology certainly intersects with, and significantly overlaps, these developments, it is by no means co-extensive with them.

'Spectacular sounds': cinema in the digital domain

Putting aside the inevitable hype about the metamorphosis of Hollywood into 'Cyberwood', like many others I am convinced that digital technology constitutes a profound revolution in cinema, primarily because of its capacity to cut across all sectors of the industry simultaneously, affecting film production, narrative conventions and audience experience. In this respect, the only adequate point of reference for the depth and extent of current changes is the transformation which took place with the introduction of synchronised sound in the 1920s. However, while the fundamental level at which change is occurring is widely recognised, it has been discussed primarily in terms of the impact of CGI (computer-generated imaging) on the film image. A more production-oriented approach would most likely begin elsewhere: with what Philip Brophy has argued is among 'the most overlooked aspects of film theory and criticism (both modern and postmodern strands)' – sound.[12] A brief flick through recent articles on digital cinema confirms this neglect: Manovich locates 'digital cinema' solely in a historical lineage of moving pictures; none of the articles in the recent Screen dossier mention sound; and even Eric Faden's 'Assimilating New Technologies: Early Cinema, Sound and Computer Imaging' only uses the introduction of synchronised sound as an historical analogy for discussing the contemporary effect of CGI on the film image.[13] While not entirely unexpected, this silence is still somewhat surprising, given the fact that digital sound technology was adopted by the film industry far earlier and more comprehensively than was CGI. And, at least until the early 1990s, with films like Terminator 2 (1991) and Jurassic Park (1993), its effect on audience experience was arguably far greater than that of digital imaging. Dominic Case [Group Services and Technology Manager at leading Australian film processor Atlab] argued in 1997:

I am more and more convinced that the big story about film technology as far as audiences are concerned in the past few years has been sound. Because, although you can do fancy digital things, the image remains glued to that bit of screen in front of your eyes, and it's not really any bigger... But the sound has gone from one woolly sound coming from the back of the screen with virtually no frequency range or dynamic range whatsoever... to something that fills the theatre in every direction with infinitely more dynamic range and frequency range.
To me, that's an explosion in experience compared to what you are seeing on the screen.

However, the visual bias of most film theory is so pervasive that this transformation often passes unremarked. Part of the problem is that we lack the necessary conceptual armature: there are no linkages which pull terms such as 'aural' or 'listener' into the sort of semantic chain joining spectacle and spectator to the adjective 'spectacular'. Film sound-mixer Ian McLoughlin notes:

Generally speaking, most people are visually trained from birth. ... Very few people are trained to have an aural language and, as a result, there isn't much discussion about the philosophy of the sound track. ... There has been very, very little research done into the psycho-acoustic effects of sound and the way sound works sociologically on the audience.[14]

Compounding this absence is the fact that the digital revolution in sound is, in many respects, the practical realisation of changes initiated with the introduction of Dolby Stereo in 1975. (On the other hand, the fact that CGI entered a special effects terrain already substantially altered by techniques of motion control, robotics and animatronics didn't prevent critical attention to it.) Four-track Dolby stereo led to a new era of sound experimentation beginning with films such as Star Wars (1977) and Close Encounters of the Third Kind (1977). As renowned sound mixer Roger Savage (whose credits include Return of the Jedi, 1983; Shine, 1996; and Romeo + Juliet, 1996) recalls: 'Prior to that, film sound hadn't changed for probably 30 years. It was Mono Academy... Star Wars was one of the first films that I can remember where people started coming out of the theatre talking about the sound track'.[15] While narrative sound effects such as dialogue and music were still generally concentrated in the front speakers, the surround sound speakers became the vehicles for a new range of 'spectacular' sound effects. In particular, greater emphasis was given to boosting low frequency response, explicitly mirroring the amplified ambience of rock music. There was also greater attention given to the 'spatialisation' of discrete sound elements within the theatre. As Rick Altman has argued, these developments presented a significant challenge to one of the fundamental precepts of classical Hollywood narrative: the unity of sound and image and the subservience of sound effects to narrative logic:

Whereas Thirties film practice fostered unconscious visual and psychological spectator identification with characters who appear as a perfect amalgam of image and sound, the Eighties ushered in a new kind of visceral identification, dependent on the sound system's overt ability, through bone-rattling bass and unexpected surround effects, to cause spectators to vibrate – quite literally – with the entire narrative space. It is thus no longer the eyes, the ears and the brain that alone initiate identification and maintain contact with a sonic source; instead, it is the whole body that establishes a relationship, marching to the beat of a different woofer. Where sound was once hidden behind the image in order to allow more complete identification with the image, now the sound source is flaunted, fostering a separate sonic identification contesting the limited rational draw of the image and its characters.[16]
Altman's observation is significant in this context, inasmuch as it suggests that the dethroning of a certain model of narrative cinema had begun prior to the digital threshold, and well before the widespread use of CGI. It also indicates the frontline role that sound took in the film industry's initial response to the incursions of video: in the 1980s the new sound of cinema was a primary point of differentiation from domestic image technologies. However, while Dolby certainly created a new potential for dramatic sound effects, in practice most film makers remained limited by a combination of logistical and economic constraints. In this respect, the transition to digital sound has been critical in creating greater latitude for experimentation within existing budget parameters and production time frames. In terms of sound production, Roger Savage argues: 'The main advantages in digital are the quality control, the speed and the flexibility'. (This is a theme which is repeated with regard to the computerisation of other areas of film making, such as picture editing and CGI.)

Enhanced speed, flexibility and control stem from a reduction in the need for physical handling and a refinement of precision in locating and manipulating individual elements. In sound production, libraries of analogue tape reels, each holding ten minutes of sound, have given way to far more compact DAT tapes and hard drive storage. The entire production process can now often be realised on a single digital workstation. There is no need for a separate transfer bay, and, since digital processing involves the manipulation of electronic data, there is no risk of degrading or destroying original recordings by repeated processing. Once the sounds are catalogued, digital workstations grant random access in a fraction of a second (eliminating tape winding time), and, unlike sprocket-based sound editing, all the tracks which have been laid can be heard immediately in playback. The creative pay-off is an enhanced ability to add complexity and texture to soundtracks. In terms of sound reproduction, the most marked change resulting from six-track digital theatre systems is improved stereo separation and frequency response, which assists better music reproduction in theatres – a change which goes hand in glove with the increased prominence that music and soundtracks have assumed in promoting and marketing films in recent years.

The enhanced role of sound in cinema is even more marked for large format films which, because of their high level of visual detail, demand a correspondingly high level of audio detail. Ian McLoughlin (who, amongst many other things, shares sound mixing credits with Savage for the large-format films Africa's Elephant Kingdom, 1998, and Sydney: A Story of a City, 1999) comments:

If you look at the two extremes of image technology, if you look at television, and then you look at something like Imax, the most interesting difference is the density of the sound track that is required with the size of the picture. When you're doing a TV mix, you try to be simple, bold. You can't get much in or otherwise it just becomes a mess. With 35mm feature films you're putting in 10, 20 times more density and depth into the sound track as compared to television, and... when you go to Imax, you need even more.
McLoughlin also makes a significant point concerning the use (or abuse) of digital sound:

When digital first came out and people found that they could make enormously loud sound tracks, everyone wanted enormously large sound tracks. ... Unfortunately some people who present films decided that the alignment techniques that companies like Dolby and THX have worked out aren't to their liking, and they think audiences like a lot of sub-bass, and so they sometimes wind that up. ... [S]uddenly you've got audiences with chest cavities being punched due to the amount of bottom end. ... Dolby and screen producers and screen distributors in America have actually been doing a lot of research into what they are calling the 'annoyance factor' of loud sound tracks. Because audiences are getting turned off by overly jarring, overly sharp soundtracks.

This comment is worth keeping in mind for two reasons. Firstly, it underlines the fact that the image is by no means the only vehicle for producing cinematic affect: in this sense, 'impact aesthetics' offers a more apt description of the trajectory of contemporary cinema than 'spectacle'. Secondly, it warns against making hasty generalisations when assessing the long-term implications of CGI. While digital imaging undoubtedly represents a significant paradigm shift in cinema, it is also feasible that the 1990s will eventually be seen more as a teething period of 'gee whizz' experimentation with the new digital toolbox, which was gradually turned towards other (even more 'narrative') ends. (The way we now look at early sound films is instructive: while contemporary audiences were fascinated by the mere fact that pictures could 'talk', in retrospect we tend to give more weight to the way sound imposed new restrictions on camera movement, location shooting and acting style.)

Painting with light

In contrast to the relative dearth of attention given to changes in areas such as sound and picture editing, digital manipulation of the film image has received massive publicity. While this is partly the result of deliberate studio promotion, it also reflects the profound changes in cinematic experience that computers have set in train. When we can see Sam Neill running from a herd of dinosaurs – in other words, when we see cinematic images offering realistic depictions of things we know don't exist – it is evident that the whole notion of photo-realism which has long been a central plank of cinematic credibility is changing. But how should this change be understood? Is it simply that 'live action' footage can now be 'supplemented' with CG elements which replace earlier illusionistic techniques such as optical printing, but leave cinema's unique identity as an 'art of recording' intact? Or is a new paradigm emerging in which cinema becomes more like painting or animation? Lev Manovich has recently taken the latter position to an extreme, arguing that 'Digital cinema is a particular case of animation which uses live-action footage as one of its many elements', and concluding: 'In retrospect, we can see that twentieth century cinema's regime of visual realism, the result of automatically recording visual reality, was only an exception, an isolated accident in the history of visual representation...'.[17]
While I suspect that Manovich significantly underestimates the peculiar attractions of 'automatic recording' (which produced what Walter Benjamin termed the photograph's irreducible 'spark of contingency', and what Barthes ontologised as the photographic punctum), it is clear the referential bond linking camera image to physical object has come under potentially terminal pressure in the digital era. However, any consideration of 'realism' in cinema is immediately complicated by the primacy of fictional narrative as the dominant form of film production and consumption. Moreover, cinema swiftly moved away from adherence to the ideal of direct correspondence between image and object which lay at the heart of classical claims to photographic referentiality. 'Cheating' with the order of events, or the times, locations and settings in which they occur, is second nature to film-makers. By the time cinema 'came of age' in the picture palace of the 1920s, a new logic of montage, shot matching and continuity had coalesced into the paradigm of 'classical narrative', and cinematic credibility belonged more to the movement of the text than to the photographic moment – a shift Jean-Louis Comolli has neatly described in terms of a journey from purely optical to psychological realism.[18] Within this paradigm all imaginable tactics were permissible in order to imbue pro-filmic action with the stamp of cinematic authority: theatrical techniques such as performance, make-up, costumes, lighting and set design were augmented by specifically cinematic techniques such as stop motion photography and rear projection, as well as model-making and matte painting, which entered the screen world via the optical printer.

Given this long history of simulation, the digital threshold is perhaps best located in terms of its effect on what Stephen Prince has dubbed 'perceptual realism', rather than in relation to an abstract category of 'realism' in general. Prince argues:

A perceptually realistic image is one which structurally corresponds to the viewer's audio-visual experience of three-dimensional space ... Such images display a nested hierarchy of cues which organise the display of light, colour, texture, movement and sound in ways that correspond to the viewer's own understanding of these phenomena in daily life. Perceptual realism, therefore, designates a relationship between the image on film and the spectator, and it can encompass both unreal images and those which are referentially realistic. Because of this, unreal images may be referentially fictional but perceptually realistic.[19]

I have emphasised Prince's evocation of fidelity to 'audio-visual experience' because it underlines the extent to which the aim of most computer artists working in contemporary cinema is not simply to create high resolution images, but to make these images look as if they might have been filmed. This includes adding various 'defects', such as film grain, lens flare, motion blur and edge halation. CG effects guru Scott Billups argues that film makers had to 'educate' computer programmers to achieve this end:

For years we were saying: 'Guys, you look out on the horizon and things get grayer and less crisp as they get farther away'. But those were the types of naturally occurring event structures that never got written into computer programs. They'd say 'Why do you want to reduce the resolution? Why do you want to blur it?'.[20]

By the 1990s many software programs had addressed this issue.
As Peter Webb (one of the developers of Flame) notes:

Flame has a lot of tools that introduce the flaws that one is trained to see. Even though we don't notice them, there is lens flare and motion blur, and the depth of field things, and, if you don't see them, you begin to get suspicious about a shot.21

In other words, because of the extent to which audiences have internalised the camera's qualities as the hallmark of credibility, contemporary cinema no longer aims to mime 'reality', but 'camera-reality'. Recognising this shift underlines the heightened ambivalence of realism in the digital domain. The film-maker's ability to take the image apart at ever more minute levels is counterpointed by the spectator's desire to comprehend the resulting image as 'realistic' — or, at least, as equivalent to other cine-images. In some respects, this can be compared to the dialectic underlying the development of montage earlier this century, as a more 'abstract' relation to individual shots became the basis for their reconstitution as an 'organic' text. But instead of the fragmentation and re-assemblage of the image track over time, which founded the development of classical narrative cinema and its core 'grammatical' structures such as shot/reverse shot editing, digital technology introduces a new type of montage: montage within the frame, whose prototype is the real time mutation of morphing (sketched below).

However, while 'perceptual realism' was achieved relatively painlessly in digital sound, the digital image proved far more laborious. Even limited attempts to marry live action with CGI, such as TRON (1982) and The Last Starfighter (1984), proved unable to sustain the first wave of enthusiasm for the computer. As one analyst observed: 'The problem was that digital technology was both comparatively slow and prohibitively expensive. In fact, workstations capable of performing at film resolution were driven by Cray super-computers'.22 It is these practical exigencies, coupled to the aesthetic disjunction separating software programmers from film makers I noted above, rather than a deeply felt desire to manufacture a specifically electronic aesthetic, which seem to underlie the 'look' of early CGI.23

Exponential increases in computing speed, coupled to decreases in computing cost, not only launched the desktop PC revolution in the mid-1980s, but made CGI in film an entirely different matter. The second wave of CGI was signalled when Terminator 2: Judgement Day (1991) made morphing a household word.24 Two years later the runaway box-office success of Jurassic Park (1993) changed the question from whether computers could be effectively used in film making to how soon this would happen. The subsequent rash of CGI-driven blockbusters, topped by the billion dollar plus gross of Cameron's Titanic (1997), has confirmed the trajectory.
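Morphing itself decomposes into two operations — a geometric warp that drags the features of each image toward shared correspondence points, and a cross-dissolve between the warped frames. A minimal sketch of the dissolve half follows; it is my own illustration, and the warp() referenced in the comment is hypothetical, standing in for the feature-based warping a production morph would require:

```python
import numpy as np

def cross_dissolve(src, dst, t):
    """The blend half of a morph: a weighted mix of two frames.
    src, dst: float arrays in [0, 1] of identical shape.
    t: 0.0 yields src unchanged, 1.0 yields dst unchanged."""
    return (1.0 - t) * src + t * dst

# A morph sequence evaluates this blend once per output frame.
# A production morph (as in Terminator 2) first warps both images
# toward shared feature points -- the hypothetical warp() below --
# so that features travel across the frame instead of ghosting:
#
#   frames = [cross_dissolve(warp(a, b, t), warp(b, a, 1 - t), t)
#             for t in np.linspace(0.0, 1.0, 24)]
```

Unlike editing, which joins distinct frames in time, every pixel of such a sequence is already a composite: the 'cut' happens inside the frame.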
Cameron is one of many influential players who argue that cinema is currently undergoing a fundamental transformation: 'We're on the threshold of a moment in cinematic history that is unparalleled. Anything you imagine can be done. If you can draw it, if you can describe it, we can do it. It's just a matter of cost'.25

While this claim is true at one level — many tricky tasks such as depicting skin, hair and water, or integrating CGI elements into live action images shot with a hand-held camera, have now been accomplished successfully — it is worth remembering that 'realism' is a notoriously slippery goal, whether achieved via crayon, camera or computer. Dennis Muren's comments on his path-breaking effects for Jurassic Park (which in fact had only 5 to 6 minutes of CGI and relied heavily on models and miniatures, as did more recent 'state of the art' blockbusters such as The Fifth Element, 1997, and Dark City, 1998) bear repeating: 'Maybe we'll look back in 10 years and notice that we left things out that we didn't know needed to be there until we developed the next version of this technology'. Muren adds:

In the Star Wars films you saw lots of X-wing fighters blow up, but these were always little models shot with high-speed cameras. You've never seen a real X-wing blow up, but by using CGI, you might just suddenly see what looks like a full-sized X-wing explode. It would be all fake of course, but you'd see the structure inside tearing apart, the physics of this piece blowing off that piece. Then you might look back at Star Wars and say, 'That looks terrible'.26

Clearly, George Lucas shared this sentiment, acknowledging in 1997 that 'I'm still bugged by things I couldn't do or couldn't get right, and now I can fix them'.27 The massive returns generated by the 'digitally enhanced' Star Wars trilogy raise the prospect of a future in which blockbuster movies are not re-made with new casts, but perpetually updated with new generations of special effects.

Stop the sun, I want to get off

Putting aside the still looming question of digital projection, the bottom line in the contemporary use of digital technology in cinema is undoubtedly 'control': particularly the increased control that film makers have over all the different components of image and sound tracks. Depending on a film's budget, the story no longer has to work around scenes which might be hard to set up physically or reproduce photo-optically — they are all grist to the legions of screen jockeys working in digital post-production houses. George Lucas extols the new technology for enhancing the ability to realise directorial vision:

I think cinematographers would love to have ultimate control over the lighting; they'd like to be able to say, 'OK, I want the sun to stop there on the horizon and stay there for about six hours, and I want all of those clouds to go away'. Everybody wants that kind of control over the image and the storytelling process. Digital technology is just the ultimate version of that.28

A direct result of digital imaging and compositing techniques has been an explosion of films which, instead of 'fudging' the impossible, revel in the capacity to depict it with gripping 'realism': Tom Cruise's face can be ripped apart in real time (Interview with the Vampire, 1994), the White House can be incinerated by a fireball from above (Independence Day, 1996), New York can be drowned by a tidal wave or smashed by a giant lizard (Deep Impact and Godzilla, both 1998). But, despite Lucas' enthusiasm, many are dubious about where the new primacy of special effects leaves narrative in cinema.
The argument put forward by those such as Sean Cubitt and Scott Bukatman is that contemporary special effects tend to displace narrative insofar as they introduce a disjunctive temporality evocative of the sublime. Focusing on Doug Trumbull's work, Bukatman emphasises the contemplative relationship established between spectator and screen in key effects scenes (a relationship frequently mirrored by on-screen characters displaying their awe at what they — and 'we' — are seeing).29 Cubitt suggests that similar 'fetishistic' moments occur in songs such as Diamonds are a Girl's Best Friend, where narrative progress gives way to visual fascination. His example is drawn from a strikingly similar terrain to that which inspired Laura Mulvey's well-known thesis on the tension between voyeurism and scopophilia in classical narrative cinema:

Mainstream film neatly combined spectacle and narrative. (Note, however, how in the musical song-and-dance numbers break the flow of the diegesis.) The presence of woman is an indispensable element of spectacle in normal narrative film, yet her visual presence tends to work against the development of a story line, to freeze the flow of action in moments of erotic contemplation.30

This connection was also made by Tom Gunning in his work on the early 'cinema of attraction': 'As Laura Mulvey has shown in a very different context, the dialectic between spectacle and narrative has fueled much of the classical cinema'.31 In this respect, a key point to draw from both Mulvey and Gunning is to recognise that they don't conceive the relationship between spectacle and narrative in terms of opposition but dialectical tension.32 This is something that other writers have sometimes forgotten. Presenting the issue in terms of an opposition (spectacle versus narrative) in fact recycles positions which have been consistently articulated (and regularly reversed) throughout the century. In the 1920s, avant-garde film makers railed against 'narrative' because it was associated primarily with literary and theatrical scenarios at the expense of cinematic qualities (Gunning begins his 'Cinema of Attraction' essay with just such a quote from Fernand Leger). Similar concerns emerged with debates in France over auteur theory in the 1950s, where the literary qualities of the script were opposed to the 'properly cinematic' qualities of mise-en-scene. In the 1970s, the 'refusal of narrative' which characterised much Screen theory of the period took on radical political connotations. Perhaps as a reaction to the extremity of pronouncements by those such as Peter Gidal, there has been a widespread restoration of narrative qualities as a filmic 'good object' in the present.

However, rather than attempting to resolve this split in favour of one side or the other, the more salient need is to examine their irreducible intertwining: what sort of stories are being told, and what sort of spectacles are being deployed in their telling? While it is easy to lament the quality of story-telling in contemporary blockbusters, few critics seriously maintain that such films are without narrative. A more productive framework is to analyse why explicitly 'mythological' films such as the Star Wars cycle have been able to grip the popular imagination at this particular historical conjuncture, marrying the bare bones of fairy-tale narrative structures to the inculcation of a specific type of special effects driven viewing experience. (To some extent, this is Bukatman's approach in his analysis of special effects.)
In this context, it is also worth remembering that, despite the quite profound transformations set in train by the use of digital technology in film making, there has thus far been little discernible effect on narrative in terms of structure or genre. The flirtation with 'non-linear' and 'interactive' films was a shooting star which came and went with the CD-ROM, while most contemporary blockbusters conform smoothly to established cine-genres (sci-fi, horror, disaster and action-adventure predominating), with a significant number being direct re-makes of older films done 'better' in the digital domain. One of the more interesting observations about possible trends in the industry is put forward by James Cameron, who has argued that digital technology has the potential to free film makers from the constraints of the 'A' and 'B' picture hierarchy:

[I]n the '40s you either had a movie star or you had a B-movie. Now you can create an A-level movie with some kind of visual spectacle, where you cast good actors, but you don't need an Arnold or a Sly or a Bruce or a Kevin to make it a viable film.33

However, Cameron himself throws doubt on the extent of this 'liberation' by underlining the industrial nature of digital film production.34 In practice, any film with the budget to produce a large number of cutting edge special effects shots is inevitably sold around star participation, as well as spectacle (as were films such as The Robe, 1953, and Ben Hur, 1926).

This point about the intertwining of narrative and spectacle is re-inforced if we look at developments in large-format film, an area frequently singled out for its over-dependence on screen spectacle to compensate for notoriously boring 'educational' narrative formats. Large-format (LF) cinema is currently in the throes of a significant transformation. The number of screens worldwide has exploded in the last four years (between 1995 and January 1999, the global LF circuit grew from 165 to 263 theatres; by January 2001, another 101 theatres are due to open, taking the total to 364, an increase of 120% in six years). More significantly, the majority of new screens are being run by commercial operators rather than institutions such as science museums. These new exhibition opportunities, coupled to the box-office returns generated by films such as Everest (the 15th highest grossing film in the USA in 1998, despite appearing on only 32 screens), have created significant momentum in the sector for the production of LF films capable of attracting broader audiences. For some producers, this means attempting to transfer the narrative devices of dramatic feature films onto the giant screen, while others argue that the peculiarities of the medium mean that LF needs to stick with its proven documentary subjects. However, most significantly in this context, none dispute the need for the sector to develop better narrative techniques if it is to grow and prosper, particularly by attracting 'repeat' audiences. In many respects, the LF sector is currently in a similar position to cinema in the 1900s, with people going to see the apparatus rather than a specific film, and the 'experience' being advertised largely on this basis.
While it would be simplistic to see current attempts to improve the narrative credentials of LF films as a faithful repetition of the path that 35mm cinema took earlier this century, since most production is likely to remain documentary-oriented, it would be equally foolish to ignore the cultural and commercial imperatives which still converge around telling a 'good story'.35

Distraction and the politics of spectacle

Despite the current rash of digitally-inspired predictions, narrative in film is unlikely to succumb to technological obsolescence. But nor will spectacle be vanquished by a miraculous resurgence of 'quality' stories. A corollary of a dialectical conception of the interrelationship between narrative and spectacle is that neither should be seen simply as a 'good' or 'bad' object in itself. For Mulvey, spectacle (exemplified by close-ups which turn woman's face and body into a fetish) and the more voyeuristic strategy of narrative were both attuned to the anxious imagination of patriarchal culture in classical cinema. Both were techniques for negotiating the threat of castration raised by the image of woman, an image classical cinema simultaneously desired and sought to circumscribe or punish. Nevertheless, even within this heavily constrained context, 'spectacle' could also assume a radical function by 'interrupting' the smooth functioning of narrative, disturbing the rules of identification and the systematic organisation of the look within the text. (This is the gist of her comparison between the films of von Sternberg, which privilege a fetish image of Dietrich over narrative progress, and those of Hitchcock, which more closely align the viewer with the male protagonist.)

Can spectacle still exert a 'progressive' function in contemporary cinema? While most critics answer this question negatively without even posing it, Paul Young is unusual in granting a measure of radical effect to the renewed primacy of spectacle. Young draws on Miriam Hansen's account of the 'productive ambiguity' of early cinema, in which the lack of standardised modes of exhibition, coupled to reliance on individual attractions, gave audiences a relative freedom to interpret what they saw, and established cinema as (potentially) an alternative public sphere. He takes this as support for his argument that contemporary 'spectacle' cinema constitutes an emergent challenge to 'Hollywood's institutional identity'.36 Young's analysis contrasts markedly with Gunning's earlier description of the 'cinema of effects' as 'tamed attractions'.37 Nevertheless both share some common ground: Young's reference to the 'productive ambiguity' of early cinema, like Gunning's rather oblique and undeveloped reference to the 'primal power' of attraction, draws nourishment from Siegfried Kracauer's early writings on the concept of distraction. In the 1920s, Kracauer set up 'distraction' as a counterpoint to contemplation as a privileged mode of audience reception, seeing it as embodying a challenge to bourgeois taste for literary-theatrical narrative forms, and also as the most compelling mode of presentation to the cinema audience of their own disjointed and fragmented conditions of existence.38
While distraction persisted as a category used by Walter Benjamin in his 'Artwork' essay of the mid-1930s, by the 1940s Kracauer seemed to have revised his position. As Elsaesser has pointed out, this re-appraisal was at least partly a re-assessment of the 'productive ambiguity' which had characterised social spaces such as cinema; by the 1940s distraction and spectacle had been consolidated into socially dominant forms epitomised by Hollywood on the one hand and fascism on the other.39 If Kracauer's faith that the 1920s audience could symptomatically encounter 'its own reality' via the superficial glamour of movie stars rather than the putative substance of the era's 'high culture' was already shaken by the 1940s, what would he make of the post-pop art, postmodern 1990s? The extent to which surface elements of popular culture have been aesthetically 'legitimated' without any significant transformation of corresponding political and economic values suggests the enormous difficulties facing those trying to utilise spectacle as a 'progressive' element in contemporary culture. However, it is equally important to acknowledge that this problem cannot be resolved simply by appealing to 'narrative' as an antidote. While the terms remain so monolithic, the debate will not progress beyond generalities.

In this respect, Kracauer's work still offers some important lessons to consider in the present. Here, by way of conclusion, I want to sketch out a few possible lines of inquiry. On the one hand, his concept of the 'mass ornament' indicates that any turn, or return, to spectacle in cinema needs to be situated in a wider social context.40 Spectacle is not simply a matter of screen image, but constitutes a social relation indexed by the screen (something Guy Debord underlined in the 1960s). Developments in contemporary cinema need to be related to a number of other trajectories, including cinema's on-going endeavours to distinguish its 'experience' from that of home entertainment, as well as the proliferation of spectacle in social arenas as diverse as sport (the Olympic games), politics (the dominance of the cult of personality in all political systems) and war (the proto-typical 'media-event').

On the other hand, the specific forms of spectacle mobilised in contemporary cinema need to be examined for the extent to which they might reveal (in Kracauer's terms) the 'underlying meaning of existing conditions'. Kracauer's analysis of cinema in the 1920s situated the popularity of a certain structure of viewing experience in relation to the rise of a new class (the white collar worker). In contemporary terms, I would argue that the relevant transformation is the process of 'globalisation'. While this is a complex, heterogeneous and uneven phenomenon, a relevant aspect to consider here is Hollywood's increasing reliance on overseas markets, both for revenue and, more importantly, for growth.41 In this context, the growing imperative for films to 'translate' easily to all corners and cultures of the world is answered by building films around spectacular action setpieces. Equally significantly, the predominant themes of recent special effects cinema — the destruction of the city and the mutation or dismemberment of the human body — are symptomatic of the underlying tensions of globalisation, tensions exemplified by widespread ambivalence towards the socio-political effects of speed and the new spatio-temporal matrices such as cyberspace.42
The most important cinematic manifestations of these anxious fascinations are not realised at the level of narrative 'content' (although they occasionally make themselves felt there), but appear symptomatically in the structure of contemporary viewing experience. The awe and astonishment repeatedly evoked by 'impossible' images as the currency of today's 'cutting edge' cinema undoubtedly function to prepare us for the uncertain pleasures of living in a world we suspect we will soon no longer recognise: it is not simply 'realism' but 'reality' which is mutating in the era of the digital economy and the Human Genome Project. If this turn to spectacle is, in some respects, comparable to the role played by early cinema in negotiating the new social spaces which emerged in the industrial city remade by factories and department stores, electrification and dynamic vehicles, it also underscores the fact that the 'death' of camera realism in the late twentieth century is a complex psycho-social process, not least because photo-realism was always less an aesthetic function than a deeply embedded social and political relation.43

Finally, I would argue that it is important not to subsume all these filmic headings under the single rubric of 'digital'. There is a need to acknowledge, firstly, that digital technology is used far more widely in the film industry than for the production of blockbusters and special effects (for example, it is the new industry standard in areas such as sound production and picture editing). Moreover, as Elsaesser has argued recently, technology is not the driving force: 'In each case, digitisation is "somewhere", but it is not what regulates the system, whose logic is commercial, entrepreneurial and capitalist-industrialist'.44 What the digital threshold has enabled is the realignment of cinema in conformity with new demands, such as 'blockbuster' marketing blitzes constructed around a few spectacular image sequences of the kind that propelled Independence Day to a US$800m gross. It has rejuvenated cinema's capacity to set aesthetic agendas, and, at the same time, restored its status as a key player in contemporary political economy.

In this context, one aspect of the digital threshold deserves further attention. In the 1990s, product merchandising has become an increasingly important part of financing the globalised film industry. While some would date this from Star Wars, Jurassic Park offers a more relevant point of reference: for the first time, audiences could see on screen, as an integral part of the filmic diegesis, the same commodities they could purchase in the cinema foyer. As Lucie Fjeldstad (then head of IBM's multimedia division) remarked at the time (1993): 'Digital content is a return-on-assets goldmine, because once you create Terminator 3, the character, it can be used in movies, in theme-park rides, videogames, books, educational products'.45 Digital convergence is enacted not simply in the journey from large screen to small screen: the same parameters used in designing CG characters for a film can easily be transmitted to off-shore factories manufacturing plastic toys.

Wednesday, October 23, 2019

World War 1 and Peace Plan

President Woodrow Wilson had a plan for peace in the war known as the Fourteen Points. It was also called "Peace Without Victory." The plan was meant to prevent international problems from causing another war. To promote his plan for peace, Wilson visited Paris, London, Milan, and Rome in Europe. However, the Allies were against Wilson's Fourteen Points. The Allies wanted to punish Germany for the war.

One idea of Wilson's peace plan was an end to secret treaties. One issue that caused World War I that was addressed in that idea was entangling alliances between the countries. Another idea was a limit on weapons. The issue that led to World War I that was addressed in that idea was militarism. The most important of Wilson's Fourteen Points was a League of Nations, to protect the independence of all countries. The issue that caused World War I that was addressed in that was also the entangling alliances.

The final treaty that was agreed on was the Treaty of Versailles. Germany thought they were getting Wilson's peace plan but instead, they got this. The Treaty of Versailles consisted of many ways of punishing Germany. The war guilt clause was a part of the treaty. The war guilt clause stated that Germany had to accept the blame for causing the war. Another part noted that Germany had to give up its colonies. It also stated that Germany had to pay all war costs. This meant they had to pay their own war costs, as well as the Allies' war costs, which came to over $200 billion. The Allies also wanted to disarm Germany. This meant that they wanted to cut off Germany's army and navy, so it wouldn't be able to fight another war for a long time.

Did the Treaty of Versailles lead to World War II? The Treaty of Versailles was the Allies' way of punishing Germany. To surrender, Germany accepted Wilson's Fourteen Points but Germany didn't get anything close to a peace plan. The treaty resulted in bitterness, betrayal, and hatred between Germany and the Allies. Italy was also not happy with the treaty because it did not get the territory it was promised for helping the Allies. Therefore, the Treaty of Versailles did "plant the seeds" of World War II because Germany and Italy wanted revenge.

World War I was one of the bloodiest wars fought in history. The two sides were the Allies and the Central Powers. The Allied countries were Great Britain, France, Russia, Serbia, and Italy. The Central Powers were Austria-Hungary, Germany, Bulgaria, and the Ottoman Empire. There were many reasons the war started, such as militarism, entangling alliances between countries, imperialism, and nationalism. Militarism was the policy of building up a strong army and navy to prepare for war. Alliances were agreements between nations in which they promised to support one another in case of attack. Imperialism was the policy of powerful countries seeking to control weaker countries. Nationalism was extreme feelings of pride in one's country. At the beginning of the war, the United States was a neutral country. However, the neutrality of the United States was tested and it entered the war. The entry of the United States into World War I marked the turning point of the war and changed the outcome. However, the postwar goals of the Allies may have led to the outbreak of World War II. For most of World War I, the United States was a neutral country.
One reason for neutrality was President Washington, who had warned against involvement in European affairs and entangling alliances. Another reason for the United States to stay neutral was the Atlantic Ocean, which acted as a natural barrier between the United States and Europe. The neutrality of the United States was soon tested.

One reason why the United States entered World War I was Germany's strategy during the war. Germany used submarines called U-boats and sank any ship without warning near the Allies' countries, which formed the war zone. Therefore, Germany cut off supplies to the Allies, while the United States was supplying the Allies. Germany also sank the Lusitania, a British passenger ship, in which 128 Americans died. The Zimmermann Telegram also enraged many Americans. The Zimmermann Telegram was sent by Germany's foreign secretary, Arthur Zimmermann, to the German minister in Mexico. The secret note urged Mexico to attack the United States if the United States declared war on Germany. In return, Germany would help Mexico win back its lost territories from the United States, such as Texas, Arizona, and New Mexico. Therefore, the U.S. entered the war on the side of the Allies.