Saturday, January 25, 2020

Statistical techniques for cryptanalysis

Introduction: Cryptography is the art of writing messages in code or cipher to disguise, and thereby secure, the content of a particular stream of text. Once encrypted, a plaintext message can be revealed only through the use of the key used to encode the cipher. Cryptography does not mask the existence of the message, but it does disguise its content [1]. Cryptanalysis, by contrast, is the art of recovering the plaintext of a message without access to the key. Successful cryptanalysis may recover the plaintext or the key for a specific ciphertext [2]. There are five general types of cryptanalytic attacks:

1. Ciphertext-only attack: The cryptanalyst has a series of ciphertexts encrypted using the same encryption algorithm, and tries to deduce the plaintext of each ciphertext or to identify the key used to encrypt them.

2. Known-plaintext attack: The cryptanalyst has a series of ciphertexts and their corresponding plaintexts, encrypted using a specific key, and tries to deduce the key by forming a relationship between the ciphertext and plaintext entries.

3. Chosen-plaintext attack: The cryptanalyst not only has access to the ciphertext and associated plaintext for several messages, but also chooses the plaintext that gets encrypted. The goal is to deduce the key used to encrypt the messages, or an algorithm to decrypt any new messages encrypted with the same key.

4. Frequency analysis: The study of the frequency of letters or groups of letters in a ciphertext, used as an aid to breaking classical ciphers. It rests on the fact that, in any given stretch of written language, certain letters and combinations of letters occur with characteristic frequencies.

5.
Rubber-hose cryptanalysis: The cryptanalyst threatens, tortures or blackmails the person who has the key until they give it up.

Among these techniques, frequency analysis, or frequency counting, is the most basic one applied to break substitution-cipher-based algorithms. Its basic use is to first count the frequency of ciphertext letters and then associate guessed plaintext letters with them. More complex uses of statistics can be conceived, such as counting pairs of letters (digrams), triples (trigrams), and so on, to provide more information to the cryptanalyst. Frequency analysis exploits the central weakness of the substitution cipher: the same plaintext letter is always encrypted to the same ciphertext letter. Frequency-analysis-based techniques were used to break ciphers based on traditional cryptographic algorithms, but they do not work well against modern block-cipher-based algorithms.

Statistical properties of English: Frequency-analysis-based cryptanalysis uses the fact that natural language is not random, and single-alphabet substitution does not hide the statistical properties of the language. In the case of encryption using monoalphabetic substitution, a useful first step in deciphering is to get a frequency count of all the letters. The most frequent ciphertext letter likely represents the most common letter in English, E, followed by T, A, O and I, whereas the least frequent are Q, Z and X [7]. Statistical patterns in a language can be detected by tracing the redundancy of text in that language. Various universal regularities have been found to characterize text from different domains and languages. The best known is Zipf's law on the distribution of word frequencies [5], according to which the frequency of terms in a collection decreases inversely with the rank of the terms.
Zipf's law has been found to apply to collections of written documents in virtually all languages [5]. English characters have a very high redundancy rate when used for cryptographic substitutions. If we have a message encrypted with a substitution cipher that needs to be cracked, we can use frequency analysis. In other words, if the sender has used an encryption scheme that replaces each letter of English with another letter, we can still recognize the original plaintext, because the frequency characteristics of the original plaintext are passed on to the ciphertext characters [4]. To apply frequency analysis, we need to know the frequency of every letter in the English alphabet, or the frequency characteristics of whatever language the sender used. For example, the letter E accounts for 12.7% of all letters in English, whereas Z accounts for only 0.1%.

As an illustration, consider the sentence: We study Cryptography as part of our course. Using a simple substitution cipher that shifts each letter two places (a->c, b->d, c->e, ..., w->y, x->z, y->a, z->b), the ciphertext becomes: yg uvwfa etarvqitcrja cu rctv qh qwt eqwtug. A simple frequency analysis of this ciphertext can then be carried out, and the resulting counts can be used by a cryptanalyst to identify the key or the plaintext, applying trial substitutions to the ciphertext until a sensible plaintext emerges. Apart from single-letter (monoalphabetic) frequency analysis, cryptanalysts also use the frequencies of paired letters, known as digram frequencies, and of three-letter sequences, called trigram frequencies. These help the cryptanalyst exploit the redundant features of English to break the cipher.
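A count like the one just described is easy to sketch in code. The snippet below is a minimal illustration (not from the original article) that tallies letter frequencies in the example ciphertext:

```python
from collections import Counter

def letter_frequencies(text):
    """Return percentage frequencies of the alphabetic characters in text."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return {ch: 100 * n / total for ch, n in counts.most_common()}

ciphertext = "yg uvwfa etarvqitcrja cu rctv qh qwt eqwtug"
for ch, pct in letter_frequencies(ciphertext).items():
    print(f"{ch}: {pct:.1f}%")
```

On such a short text the counts are noisy, but the most frequent ciphertext letters already hint at the mapping back to common English letters.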
The most common digrams (in order): th, he, in, en, nt, re, er, an, ti, es, on, at, se, nd, or, ar, al, te, co, de, to, ra, et, ed, it, sa, em, ro. The most common trigrams (in order): the, and, tha, ent, ing, ion, tio, for, nde, has, nce, edt, tis, oft, sth, men. (Table 1: Digram and Trigram Frequencies [6].) These help in identifying the most commonly used terms in English to break a cipher. The digram frequencies are used to break two-letter words such as an, to and of, and the trigram frequencies are used to break three-letter words such as the, are and for. After breaking a significant number of two- and three-letter words, it is practically easy to identify the key from the cracked plaintext values by matching the corresponding values in the ciphertext. This huge weakness of English is used to break ciphertexts produced by simple algorithms operating on the English alphabet. In practice, the use of frequency analysis consists of first counting the frequency of ciphertext letters and then assigning guessed plaintext letters to them. Many letters will occur with roughly the same frequency, so a cipher containing many Xs may indeed map X onto R, but could also map X onto G or M. Some letters, however, occur more frequently in every alphabetic language; if there are more Xs in the ciphertext than anything else, it is a good guess for English plaintext that X is a substitute for E. But T and A are also very common in English text, so X might be either of them [4]. Thus the cryptanalyst may need to try several combinations of mappings between ciphertext and plaintext letters. Once the common single-letter frequencies have been resolved, paired patterns and other patterns are solved. Finally, when sufficient characters have been cracked, the rest of the text can be recovered by simple substitution. Frequency analysis is extremely effective against the simpler substitution ciphers and will break astonishingly short ciphertexts with ease.
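Digram and trigram counts can be gathered the same way as single-letter counts. This sketch (my own illustration, using a made-up sample string) slides a window of width n over the letters:

```python
from collections import Counter

def ngram_counts(text, n):
    """Count all n-letter windows in the letters-only text."""
    letters = "".join(c for c in text.lower() if c.isalpha())
    return Counter(letters[i:i + n] for i in range(len(letters) - n + 1))

sample = "the theory of the thing"
print(ngram_counts(sample, 2).most_common(3))  # top digrams
print(ngram_counts(sample, 3).most_common(3))  # top trigrams
```

Spaces are stripped before counting, since classical ciphertexts are often written without word divisions; on real ciphertext one would compare these counts against standard English digram and trigram tables.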
Attacks on traditional algorithms: Messages encrypted using traditional algorithms have been defenseless against cryptanalytic attacks because they encrypt character by character, which can easily be broken using frequency-analysis-based attacks.

1. Caesar cipher: One of the oldest ciphers, the Caesar cipher replaces one letter of the plaintext with another to produce the ciphertext, and any particular letter in the plaintext will always turn into the same letter in the ciphertext for every instance of that plaintext character. For instance, all Bs will turn into Fs. Frequency analysis is based on the fact that certain letters, and combinations of letters, appear with characteristic frequency in essentially all texts in a particular language [9]. For instance, in English, E is very common, while X is not. Likewise, ST, NG, TH and QU are common combinations, while XT, NZ and QJ are very uncommon, or even impossible, in English. This shows how the Caesar cipher can be broken with ease just by identifying the frequency of each letter in the ciphertext. A message encrypted using the Caesar cipher is extremely insecure, as an exhaustive search of the keys easily breaks the code.

2. Substitution ciphers: The Caesar cipher is a special case of the general substitution cipher, where the key is a permutation of all twenty-six characters of the English alphabet. Rather than restricting the key to a fixed shift, any permutation of the alphabet may serve as the key. This increases the number of possible keys to 26!, which is about 4 × 10^26, eliminating the exhaustive search of the keyspace [7]. To decrypt the cipher, the statistical frequency distribution of single-letter occurrences in English is analyzed.
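The Caesar cipher's vulnerability to exhaustive search is easy to demonstrate. The sketch below (an illustration, not code from the article) encrypts with a fixed shift and then simply tries all 26 shifts:

```python
import string

ALPHA = string.ascii_lowercase

def caesar(text, shift):
    """Shift each letter by `shift` places; leave spaces and punctuation alone."""
    table = str.maketrans(ALPHA, ALPHA[shift % 26:] + ALPHA[:shift % 26])
    return text.lower().translate(table)

ciphertext = caesar("attack at dawn", 2)
print(ciphertext)  # cvvcem cv fcyp
# Exhaustive search: with only 26 candidate keys, print every decryption
# and pick out the one that reads as English.
for k in range(26):
    print(k, caesar(ciphertext, -k))
```

With a keyspace this small, even a human reading the 26 candidates finds the plaintext immediately; frequency analysis merely speeds up the guess.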
Then the digram and trigram frequencies of standard English words are compared with the frequencies of digrams and trigrams in the cipher, to finally reconstruct the key and in turn decipher the text. This is an efficient method of breaking the substitution cipher because each plaintext letter is represented by the same ciphertext letter throughout the message, so all statistical properties of the plaintext are carried over to the ciphertext.

3. Vigenere cipher: The Vigenere cipher offers greater security because a given plaintext letter is not always represented by the same ciphertext letter. This is achieved by using a sequence of n different substitution ciphers to encrypt a message. The technique increases the possible number of keys from 26! to (26!)^n. Although it was once considered unbreakable, Kasiski's method of attacking a Vigenere cipher yielded successful decryptions. The first step is to find the key length (n): identical segments of plaintext are encrypted to the same ciphertext whenever they are b positions apart, where b ≡ 0 (mod n). According to Kasiski, the next step is to find all identical ciphertext segments of length greater than 3 and record the distances between them [7]; these distances can then be used to predict the key length n. Once n is found, the key itself is recovered by an exhaustive search of the keyspace over the remaining combinations: substituting candidate values for n generates substrings, and back-substitution of a candidate key into the cipher reveals whether a sensible plaintext emerges [7]. This is repeated until the actual key, and hence the plaintext, is found. This method can take a long time when the key is very long, as the keyspace grows with the key length.
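Kasiski's first step, recording distances between repeated ciphertext segments, can be sketched as follows (an illustrative toy with a contrived ciphertext; a real attack would also tabulate the factors of each distance):

```python
from collections import defaultdict
from functools import reduce
from math import gcd

def kasiski_distances(ciphertext, length=3):
    """Find every repeated segment of the given length and record the
    distances between consecutive occurrences."""
    text = "".join(c for c in ciphertext.lower() if c.isalpha())
    positions = defaultdict(list)
    for i in range(len(text) - length + 1):
        positions[text[i:i + length]].append(i)
    distances = []
    for pos in positions.values():
        distances += [b - a for a, b in zip(pos, pos[1:])]
    return distances

# "abc" repeats every 6 characters, so the key length likely divides 6.
dists = kasiski_distances("abcxyzabcqrsabc")
print(dists, reduce(gcd, dists))
```

The greatest common divisor of the recorded distances is a natural first guess for n, after which each of the n interleaved substrings can be attacked as an ordinary monoalphabetic cipher.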
Defeating frequency-based attacks: Frequency-based attacks have long been used to break traditional encryption algorithms, exploiting the fact that these algorithms do not eliminate the statistical properties of the language upon encryption. The first way to defeat such attacks is to encrypt blocks of characters at a time rather than single letters [7]. This ensures that the same text in the plaintext is not always encrypted to the same text in the ciphertext. For example, under the Caesar cipher the word ADDITIONAL is encrypted to CFFKVKQPCN: the letters A, D and I are each repeated, and at every instance the scheme encrypts A to C, D to F and I to K. This redundancy can clearly be used during frequency analysis to map the characters back to the original plaintext. With a block encryption scheme this phenomenon does not occur: the plaintext is broken into chunks, or blocks, of data that are fed as input to the encryption algorithm, which reads each input block along with the key and encrypts the complete block rather than individual characters, so there is a much smaller chance that two blocks will produce the same chunk of ciphertext. The second way of defeating frequency analysis is to use synonyms of words [7] rather than repeating the same word over and over in a sentence. English has many words with more than one synonym, providing a set of words to choose from in a particular context. To help in selecting a synonym, grammar checking would have to be used to ensure that the meaning of the sentence is not altered by changing the words.
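The repetition in CFFKVKQPCN can be seen directly. The toy sketch below (my own illustration, not a real block cipher) contrasts the plain Caesar shift with a position-dependent variant, to show how breaking the one-letter-one-symbol link hides the repeats:

```python
shift = 2
pt = "additional"

# Character-by-character Caesar: repeated plaintext letters leak through.
char_ct = "".join(chr((ord(c) - 97 + shift) % 26 + 97) for c in pt)
print(char_ct)  # cffkvkqpcn -- the repeated A, D and I are visible

# Position-dependent toy scheme: the same letter maps to different symbols
# depending on where it appears, so the repeats vanish.
pos_ct = "".join(chr((ord(c) - 97 + shift + i) % 26 + 97)
                 for i, c in enumerate(pt))
print(pos_ct)
```

The second scheme is still weak, but it already defeats naive single-letter counting, which is the property block encryption delivers at scale.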
Attacks against this technique could include creating a list of the best synonyms, but this would not help the attacker much, since a different word can be used at each instance the same meaning needs to be expressed, defeating the benefit of the list. This technique of using alternate representations for common words to defeat cryptanalysis is called homophones [7] in cryptography. A third technique that can effectively defeat cryptanalysis is polyalphabetic substitution, that is, the use of several cipher alphabets to encrypt the message [3] rather than applying the same substitution again and again. The Vigenere cipher is a form of polyalphabetic cipher. This ensures that a given plaintext character is not always encrypted to the same ciphertext character within a message, so direct frequency analysis of the cipher cannot successfully retrieve the original message. However, other techniques can be used to identify the key length; if that succeeds, a frequency analysis attack may still recover the original plaintext. Finally, a further technique that can defeat frequency analysis is to encrypt a single plaintext character with two ciphertext characters [3]: upon encountering the same character again, a different ciphertext value is used. This can be achieved by using a key twice the size of the plaintext message and encrypting the same plaintext character with two different values from the key. This ensures that no plaintext character is tied to a single ciphertext character, defeating the frequency-analysis method of breaking the cipher.

Modern encryption algorithms and cryptanalysis: Modern cryptographic algorithms take a better approach to defeating frequency-analysis-based attacks.
Today's cryptographic algorithms use block encryption rather than encrypting characters one at a time, eliminating the redundancy of ciphertext alphabets for similar plaintext alphabets. Block ciphers are the central tool in the design of protocols for shared-key cryptography. A block cipher is a function E: {0,1}^k × {0,1}^n -> {0,1}^n. This notation means that E takes two inputs, one a k-bit string and the other an n-bit string, and returns an n-bit string [2]. The first input is the key, which is used to encrypt the secret message. The second string is called the plaintext, and the output is called the ciphertext. The key length k and the block length n are parameters associated with a specific block cipher; they vary from cipher to cipher and depend on the design of the algorithm itself. Some of the most trusted symmetric ciphers include AES, Triple-DES, Blowfish, CAST and IDEA. In public-key cryptography, the most commonly used cryptosystems are RSA and the Diffie-Hellman systems, in which no practical vulnerabilities had been found at the time of writing. Preferably, the block cipher E is a publicly specified algorithm. In typical usage, a random key K is chosen and kept secret between a pair of users. The function E_K is used by the sender to encrypt the message, for a given key, before sending it to the intended receiver, who decrypts the message using the same key [2]. Security relies on the secrecy of the key. So, at first, one might think of the cryptanalyst's goal as recovering the key K given some ciphertext intercepted during transmission. The block cipher should be designed to make this task computationally difficult. To achieve this, the algorithms used to encrypt the message must be designed with a high degree of mathematical complexity that cannot be reversed to obtain the plaintext from a known ciphertext.
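A toy model may make the E: {0,1}^k × {0,1}^n -> {0,1}^n notation concrete. The function below is an insecure single-XOR stand-in, purely illustrative and nothing like a real design such as AES, but it has the right signature with k = n = 128; the closing arithmetic shows why exhaustive search over K is the attack the designer must make infeasible:

```python
def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    """E(K, M): a 128-bit key and a 128-bit block in, a 128-bit block out.
    One XOR round -- NOT secure, only the right type signature."""
    assert len(key) == 16 and len(block) == 16   # k = n = 128 bits
    return bytes(b ^ k for b, k in zip(block, key))

key = bytes(range(16))
ct = toy_block_cipher(key, b"sixteen byte msg")
assert toy_block_cipher(key, ct) == b"sixteen byte msg"  # XOR is self-inverse

# Why recovering K by search is hopeless at k = 128, assuming ten billion
# machines each testing ten billion keys per second:
seconds = 2 ** 128 / (10 ** 10 * 10 ** 10)
print(f"{seconds / (3600 * 24 * 365):.1e} years to sweep the keyspace")
```

A real block cipher replaces the single XOR with many rounds of substitution and permutation, but the interface, and the keyspace arithmetic, are the same.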
The length of the key used during encryption plays an important role in deciding the effectiveness of an algorithm. Key length is conventionally measured in bits, and most of the well-known strong ciphers have key lengths between 128 and 256 bits. A cipher is considered strong if, after years of attempts to find a weakness in the algorithm, there is no known effective cryptanalytic attack against it. This indicates that the most efficient way of breaking an encrypted message without knowing the key is brute force, i.e. trying all possible keys. The effort required is determined by the number of possible keys, known as the keyspace. Knowing the speed of the computer attempting to break the key, it is easy to calculate how long it would take to search the keyspace of a particular cipher [2]. For example, consider a cipher that uses 128-bit keys. Each bit can be either 0 or 1, so there are 2^128, or approximately 3 × 10^38, keys. Suppose about ten billion computers are assigned the task of breaking the code, each capable of testing ten billion keys per second; then running through the entire keyspace would take around 3 × 10^18 seconds, which is about 100 billion years. In fact, it would be necessary to run through only half the keyspace on average to hit upon the correct key, which would still take around 50 billion years. This is longer than the estimated age of the universe according to modern cosmology, which is about 15 billion years [2]. This shows that it is practically infeasible to crack modern cryptographic algorithms using brute-force attacks, and illustrates their resistance towards cryptanalytic attacks.

Conclusions: Cryptography has progressed in recent years, and modern cryptographic algorithms have proved successful in defending against most forms of cryptanalytic attack.
Frequency-analysis-based attacks have exploited the weaknesses in traditional encryption algorithms to reveal the plaintext messages encrypted with them. Natural language is not random in nature, and frequency-counting attacks exploit exactly this. Based upon the frequency of letters in the ciphertext, one can guess the plaintext characters from their redundancy rate and the specific combinations of letters in words. This weakness can be repelled by using ciphers that do not carry the redundancy of the plaintext into the ciphertext. Modern block ciphers encrypt a chunk of plaintext into ciphertext and vice versa, eliminating the redundancy of the language used in encryption. Although the algorithm plays an important part, it is the key length used in block ciphers that helps most in repelling cryptanalysis. Modern ciphers use key lengths starting from 128 bits, eliminating the possibility of a practical brute-force attack. The longer the key, the more time it takes to break the cipher. These advantages have made modern cryptographic algorithms popular within the security community. No weaknesses have yet been found in these algorithms that would allow one to identify the plaintext message.

Bibliography:
[1] Stallings, W., Cryptography and Network Security, Chapter 1, Third Edition, Prentice Hall, 2003.
[2] Schneier, B., Applied Cryptography, Chapter 1, Second Edition, John Wiley & Sons, New York City, New York, USA, 1996.
[3] Hart, G.W., To Decode Short Cryptograms, Communications of the ACM 37(9), 1994, pp. 102-108.
[4] Lee, K.W., Teh, C.E., Tan, Y.L., Decrypting English Text Using Enhanced Frequency Analysis, National Seminar on Science, Technology and Social Sciences (STSS 2006), Kuantan, Pahang, Malaysia.
[5] Zipf, G.K., Human Behaviour and the Principle of Least Effort, Cambridge: Addison-Wesley, 1949.
[6] Lewand, R.E., Cryptological Mathematics, The Mathematical Association of America, 2000, pp. 345-346.
[7] Stamp, M. and Low, R.M., Applied Cryptanalysis, Chapters 1 and 2, John Wiley & Sons, New York City, New York, USA, 2007.
[8] http://www.simonsingh.net, online frequency analysis tools.
[9] http://www.textalyser.net, online text analysis and frequency analysis information.

Friday, January 17, 2020

Five of Frankenstein Essay

This line suggests that the creature was fairly friendly, rather than demonical: ‘… while a grin wrinkled his cheeks’. The creature was very similar to a newborn baby, with no experience of life and no knowledge of how to communicate and act: ‘His jaws opened, and he muttered some inarticulate sounds’. Despite the creature’s lack of knowledge and experience, Victor somehow manages to treat the creature in an appalling manner. This series of events portrays Victor as somebody who is very cruel and selfish; furthermore, it makes the reader sympathise with the creature. The idea of bringing someone into the world by stitching together pieces of dead bodies and passing electricity through the corpse raises questions of immorality; moreover, the way Victor brought life into the world and then abandoned it is far worse. In this story, Victor Frankenstein acted similarly to the Ancient Greek character Prometheus, as he played God. He felt that he had the right to create new life. He then realised how wrong it was, but by that time it was too late. In the 19th century, most people in England were very religious, so the way Victor plays God in the story would have been widely frowned upon. It was extremely uncommon for people to see boundaries and morals being questioned and stretched in this way. The creature that Victor created was far from a monster; it was but a helpless, needy being that he had abandoned, and this was not very different from abandoning a newborn baby. Society cruelly rejected him due to his appearance, which goes to show how narrow-minded society can be. This could have been part of the message that Mary Shelley wished to send out. The true monstrous figure in the story is Victor. This is all down to his cruel nature and the disgust he shows towards the creature he spent two years trying to bring to life.
Victor even goes as far as describing the creature as a ‘half-distinguished light’, meaning he felt the creature was merely ‘half of a human being’. In my opinion, the author, Mary Shelley, was trying to send out two messages: that it is wrong to play God, and that society can sometimes be very judgemental. She depicts the creature as helpless, confused and needy, so that the reader will sympathise with him, whereas she depicts Victor as a cruel and selfish person, which supports the point that playing God is wrong. The reason I believe Mary Shelley tried to send out the message that society is often judgemental is so that people will realise that appearances are not everything, and that they can learn to avoid judging people by appearance before getting to know their past and present situation.

1,060 words. Aran Atwal.

Thursday, January 9, 2020

Adjustment Disorder Diagnosis and Treatment - Free Essay Example

Sample details: Pages: 3; Words: 794; Date added: 2017/09/15.

Adjustment disorder is a mental disorder that results from unhealthy responses to stressful or psychologically distressing events in life. This failure to adapt then leads to the development of emotional and behavioral symptoms. All age groups are affected by this disorder, and children have the same chance of developing the illness. While it is difficult to determine the causes of adjustment disorder, researchers suggest that genetics play a large part, as well as chemical changes in the brain, life experiences and mood. Some common stressors contributing to the disorder include the ending of a romantic relationship, loss of a job, career change, an accident, relocating to a new area, or loss of a loved one (Mayo Clinic, 2010). An adjustment disorder causes feelings of depression, anxiousness, crying spells, sadness, desperation and lack of enjoyment, and some have reported experiencing thoughts of suicide. Additionally, the illness leaves one unable to go about their normal routine or work and visit with friends and family. The length of symptoms varies from zero to six months (acute) to longer than six months (chronic). In acute cases, symptoms can eventually go away; in chronic cases, symptoms begin to disrupt one's life, and professional treatment is necessary to prevent the illness from worsening. Lastly, this disorder carries the possibility of alcohol and drug abuse, and could eventually result in violent behavior. According to a report issued by Tami Benton of WebMD, "the development of emotional or behavioral symptoms in response to an identifiable stressor(s) occurs within 3 months of the onset of the stressor(s).
These symptoms or behaviors are clinically significant, as evidenced by marked distress in excess of what is expected from exposure to the stressor, or significant impairment in social or occupational (academic) functioning. The stress-related disturbance does not meet criteria for another specific axis I disorder and is not merely an exacerbation of a preexisting axis I or axis II disorder. The symptoms do not represent bereavement. Once the stressor (or its consequences) has terminated, the symptoms do not persist for more than an additional 6 months." A determination is then made as to whether the illness is acute or chronic. A differential diagnosis issued by Benton states that "Adjustment Disorders (ADs) are located on a continuum between normal stress reactions and specific psychiatric disorders. Symptoms are not likely a normal reaction if the symptoms are moderately severe or if daily social or occupational functioning is impaired. If a specific stressor is involved and/or the symptoms are not specific but are severe, alternate diagnoses (eg, posttraumatic stress disorder, conduct disorder, depressive disorders, anxiety disorders, depression or anxiety due to a general medical condition) are unlikely" (Benton, 2009). "Clinical treatment modalities are difficult due to lack of clinical trials; as these treatments remain a decision influenced by a consensus", reports Benton. Because AD originates from a psychological reaction to a stressor, the stressor must be identified and communicated by the patient. The non-adaptive response to the stressor may be diminished if the stress can be eliminated, reduced or accommodated. Therefore, treatment of ADs entails psychotherapeutic counseling aimed at reducing the stressor, improving coping ability with stressors that cannot be reduced or removed, and building an emotional state and support systems to enhance adaptation and coping.
Further, the goals of psychotherapy should include: an analysis of the stressors affecting the patient, and whether they can be eliminated or minimized; clarification and interpretation of the meaning of the stressor for the patient; reframing the meaning of the stressor; illuminating the concerns and conflicts the patient experiences; identifying a means to reduce the stressor; maximizing the patient's coping skills; and assisting patients to gain perspective on the stressor, establish relationships, attend support groups, and manage themselves and the stressor. "Psychotherapy, crisis intervention, family and group therapies, cognitive behavioral therapy, and interpersonal psychotherapy are effective for eliciting the expressions of affects, anxiety, helplessness, and hopelessness in relation to the identified stressor(s)" (Benton, 2009). For patients with minor or major depressive disorders who have not responded to psychotherapy and other interventions, trials of antidepressants are recommended. Combined psychotherapy and pharmacotherapy are suggested for patients suffering from adjustment disorder with anxious mood. A report further states that "treatments that are effective with other stress-related disorders may be constructive interventions for AD; and that treatment relies on the specificity of the diagnosis, the construct of stressor-related disorders, and whether the stressors are involved as etiological precipitants, concomitants, or some other unrelated factors" (Benton, 2009).

References:
Benton, T. D. (2009). Emedicine from WebMD. Medscape's continually updated clinical reference. Retrieved from https://emedicine.medscape.com/article/292759-overview
Mayo Clinic. (2010). Mayoclinic.com. Retrieved from https://www.mayoclinic.com/health/adjustment-disorders/DS00584/DSECTION=symptoms

Wednesday, January 1, 2020

Solutions to Electronic Waste Essay - 1179 Words

Electronic and electrical equipment is an essential part of the busy world. It substitutes for hard human work and makes it faster. The majority of mankind has a computer at home or at work. In recent years, the turnover of electronic equipment has become faster due to obsolescence and fashion (Deathe et al. 2008, 322). The problem of e-waste and its influence on the future environment hangs over modern society. E-waste, also known as electronic waste, means electrical and electronic equipment which is not suitable for use and fills the dumps. Electronic equipment such as mobile phones, computers and televisions consists of hazardous materials, which pollute the environment and impact human health. According to the National Safety Council, lead's 1.6 billion pounds and […] 2008, 324). Kahhat et al. (2008, 957) report that many countries already have experience in recycling, such as Japan, Taiwan, South Korea and the United States. These countries have different approaches to this solution. In South Korea, consumers pay a fee when buying new substitute equipment or pay it to the government; comparatively, in Japan, the prices of electronic equipment include the fees. Recycling is one of the popular solutions, and it has some specific features. From the environmental point of view, recycling is important because it eliminates the influence of hazardous materials. Cui and Forssberg (2003, 243-263) show that if metals are recycled, the majority of the energy otherwise required is saved; for aluminum, the energy saving is 95%. With respect to accessibility for consumers, there are some disadvantages because of the cost. If people bought their equipment before the "PC Recycling Mark" was enacted, they should pay fees of approximately US$40 (PC3R, 2008; Terazono et al., 2006; Yoshida et al., 2007). If their equipment carries the "PC Recycling Mark", they will not pay a fee, because it is included in the price of the equipment.
However, people can sell their old electronics to recycling companies and retrieve some money. Evidence for the complications of feasibility has been presented by Liu et al. (2006) and Greenpeace (2005), who showed that recycling may still be done by hand, and that it injures the workers.