Engineering & Technology
Facebook Fires Bazooka
Arora, N. (2013). Facebook fires bazooka against Google, Apple, Microsoft and BlackBerry. Retrieved from http://www.forbes.com/sites/greatspeculations/2013/04/04/facebook-fires-bazooka-against-google-apple-microsoft-and-blackberry/
The article reviews the need for phones to be designed around consumers' needs rather than around existing applications. It cites Facebook CEO Mark Zuckerberg's statement that phones should be designed to suit consumers, not to fit existing applications. With this in mind, the article introduces Facebook's new application, Home. Home promises a friendlier experience for Android phone users: it regularly updates them on activity in the other applications they own. The introduction of Home poses an immense challenge to competitors such as Google, Apple, BlackBerry and Microsoft, who are yet to unveil similarly user-friendly applications. Consumers interested in a user-friendly application such as Home are bound to migrate from these main competitors.
About the Author
The article is written by Nigam Arora, an engineer and nuclear physicist by profession. Arora is also an active editor of The Arora Report, which regularly publishes newsletters aimed at encouraging corporations to keep up with change. Arora is also a Forbes contributor whose columns likewise focus on changes across leading corporations such as Google, Apple, Facebook and Microsoft.
The article is quite informative and reveals the introduction of an application that will create stiff competition for rival technology corporations. It indirectly urges those corporations to innovate and establish similarly customer-friendly applications. The article clearly indicates how each of the other corporations (Microsoft, Google, BlackBerry and Apple) is disadvantaged by the presence of Facebook's new application, and it also introduces interested customers to the application, due to be rolled out on April 12th. Overall, the article is quite objective and informative.
Reflecting back twenty years, life used to be largely analog: playing games, making friends and even education. Since then, my life has witnessed massive technological advancement, such that everything is now driven by technology. In the modern world, technology has been touted as the most important invention since the wheel. It has simplified life and helped man solve a lot of problems, and it has penetrated all aspects of human life, from education and transport to entertainment and business. Technological devices are available to everyone regardless of economic or social status. What is amazing is how technology has changed the life of children. The modern child has access to phones, iPods, laptops and smartphones at a very tender age, and these devices have found several uses in children's lives. Let us explore what technology has really done to the life of a child, once surrounded by storybooks, neighborhood friends and physical libraries. To do this, let us look back a few years. Education was about a teacher coming to class with a book and students writing notes; currently we have virtual teachers. Assignments were completed and handed to the teacher; currently, assignments are submitted through Turnitin. Physical libraries were the order of the day; today, almost every learning institution has an e-library. This has essentially modernized the education sector and made education much more fun, though not without challenges, such as the lack of face-to-face communication between teacher and student, increased rates of plagiarism and so on.
The pre-technology life of a child was characterized by outdoor games involving a lot of physical activity, which helped children's physical, intellectual and social development. In the modern world, video gaming has become an integral part of children's lives. Children spend a massive amount of time playing PC games on the internet and from other sources, and concerns have emerged about the negative effects of these games on their lives. Despite the invention of video games that involve physical activity, such as Wii Sports, the overall effect of video games on children's lives is immense. Childhood obesity has significantly increased in the country, and this is blamed on inactivity. Children spend an average of eight hours a day either playing video games or on social media, which has significantly reduced their activity and increased their chances of childhood obesity and other lifestyle diseases (Strasburger, Jordan & Donnerstein, 2010). Video games have also been associated with harming a child's social life. Unlike previous times, when children used to interact with others during play time, nowadays children spend a considerable amount of time confined to their rooms playing video games. Proponents of technology use will argue that these children have hundreds, if not thousands, of friends online. However, these virtual friends cannot serve the same purpose as physical friends. It is, therefore, paramount to have some external intervention to help children utilize these amazing technologies without compromising their health and social life.
Currently, Google and other web search engines have become key consultation sources. Children no longer depend on their families, relatives or friends to answer questions about life. It is evident that not all information found through these search engines is factual; children are therefore exposed to billions of pieces of information with no one to guide them on which is relevant and which is harmful. Despite the undisputed advantages of availing information to children, it is important to consider what kind of information children consume. Material with explicit content has become readily available on the internet; pornography and nudity are unrestrictedly available. The impacts of exposing children to this kind of material have been well documented. According to the National Council for Prevention of Child Sexual Abuse and Exploitation, such exposure affects children's language and sexuality. Explicit sexual material has been associated with earlier onset of sexual behaviors and an increased rate of teenage pregnancy (Collins et al., 2004). It also influences children's values, which may redefine social morality. However, it is not all evil on the internet.
Despite the shortcomings associated with introducing technology to children, one fact remains: it is impossible to separate technology from modern life. Technology will remain an integral part of life, and especially of the lives of children. It is impossible to take away video games, social media and the internet, and it would take thousands of pieces of legislation to control and protect children from explicit material on the internet; but society can intervene to help children exploit technology in a positive way. It is, therefore, my greatest aspiration to join the individuals who have stepped forward to increase awareness among children about technology use. I look forward to helping children use their video games without compromising their health or social life, to helping children access the internet for their betterment, to protecting the privacy of children on the internet and to building wholesome children.
Collins, R. L., Elliott, M. N., Berry, S., Kanouse, D., Kunkel, D., Hunter, S., et al. (2004). Watching sex on television predicts adolescent initiation of sexual activities. Pediatrics, 114(3), 280-289.
Strasburger, V., Jordan, A. B., & Donnerstein, E. (2010). Health effects of media on children and adolescents. Pediatrics, 125(4), 756-767.
National Council for Prevention of Child Sexual Abuse and Exploitation. The impact of pornography on children & youth. Retrieved from http://www.preventtogether.org
Conclusion On Metals And Alloys
In conclusion, metals and alloys differ in their properties and uses. An alloy is a mixture of two or more metals, and different alloys are made from different metals, including aluminum and magnesium alloys. The usage of metals and alloys depends on their properties. Each metal has unique properties, but some characteristics are common to most metals: conductivity, high melting points and high boiling points. Conductivity is depicted by the ability of metals to transfer electricity, although some metals conduct poorly. Metals have high melting points because of the extensive metallic bonds within their structure, and high boiling points for the same reason. This has led to different applications of metals, including the manufacture of electrical appliances and electrical wiring. Alloys, in turn, have properties that depend on the metallic elements used to form them, and these properties are often superior to those of pure metals; they include hardness and resistance to wear and corrosion. Alloys are harder than pure metals because combining elements with differently sized atoms distorts the metallic structure, and alloys are also resistant to wear and corrosion.
A collection of journal articles was reviewed to determine the impact of elastic and physical properties on metal alloys. The theoretical review helps in understanding different metal alloys and their physical properties. From the analysis, it is evident that alloys are significant in everyday life because of their commercial importance, which is evidenced in several ways. Producing alloys is cheaper than producing pure metals, and this has made alloys the most widely used metallic materials in the world, as companies use them to produce resistant, durable, strong products. Alloys can be tailored to different products, which has reduced production costs. Companies have modified alloys to meet their needs, adding other metals to make them stronger, less corrosive and more durable. Additionally, alloys have helped in conserving the environment through cradle-to-cradle design: they enable companies to use raw materials that are not readily available, and companies also recycle alloys to produce new metal for manufacturing, thereby protecting the environment (Smith & Hashemi, 2001).
Further, alloys have economic significance. Manufacturing alloys is more cost-effective than manufacturing traditional steel, and this leads to more profit. Alloy manufacturing has also increased jobs in different sectors, including mining, engineering, refining and design, where employees perform the different functions needed to produce the alloys. Alloys are useful in the aviation industry, and they will continue to be, because of the high demand for durable products. Different metals and alloys matter to the aviation industry because of their distinct properties: the materials used should offer strength, ductility, malleability and elasticity, and other relevant properties are brittleness, conductivity and density. These properties affect aviation products in different ways, and manufacturers should take them into account during manufacturing. Manufacturers should consider the density of an alloy when manufacturing planes, as poor weight-and-balance design contributes to crashes; density is vital because it determines whether the plane is balanced and has the right weight. The alloys used should be malleable, forming curved shapes without defects, breaking or cracking. Another property is ductility, which is crucial in making wires and tubes; ductile metals resist shock loads and are thus vital in the aviation industry. Aluminum alloys are used in the manufacture of cowl rings, ribs, fuselages and bulkheads.
Manufacturers in the aviation industry also consider the elasticity of a metal or alloy: the material should not be permanently deformed when force is applied. This is essential in aircraft manufacturing, as parts must be designed so that they are not stressed beyond the elastic limit under maximum load. Lastly, the metals and alloys used in aircraft manufacturing should conduct electricity well enough to regulate the heat needed for welding; conductivity is also key in bonding, as it aids in the elimination of radio interference. The most commonly used metals and alloys in the aviation industry are aluminum, aluminum alloys, magnesium and magnesium alloys. Aluminum has a high strength-to-weight ratio, which makes it fit for aircraft construction, while magnesium is widely used because it is readily available and light (Smith & Hashemi, 2001).
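The strength-to-weight comparison above can be made concrete with a small calculation. The figures below are rough, typical handbook values assumed for illustration only, not taken from Smith & Hashemi:

```python
# Illustrative sketch: comparing specific strength (tensile strength / density)
# for three aircraft-relevant materials. The numbers are assumed, typical
# handbook figures, not values from the cited text.
materials = {
    # name: (density in kg/m^3, tensile strength in MPa)
    "structural steel":    (7850, 500),
    "7075 aluminum alloy": (2810, 570),
    "magnesium alloy":     (1800, 280),
}

specific_strength = {
    name: strength * 1e6 / density  # Pa / (kg/m^3), i.e. J/kg
    for name, (density, strength) in materials.items()
}

for name, s in specific_strength.items():
    print(f"{name:22s} strength/weight = {s / 1000:.0f} kJ/kg")
```

Even though the aluminum alloy is only slightly stronger than steel in absolute terms, its far lower density gives it roughly three times the strength per unit weight, which is why it dominates airframe structures.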
To prevent aircraft accidents and manufacturing losses, aircraft manufacturers should be aware of the different properties of metals and alloys. They should understand how each property affects the manufactured product and how to overcome the weaknesses or limitations resulting from those properties. A large number of accidents in the aviation industry have resulted from poor balance and weight issues: manufacturers have used the wrong metals and alloys for different parts of the plane, affecting its balance and overall weight. Therefore, manufacturers should use metals and alloys whose properties enhance aviation safety, such as aluminum and magnesium and their alloys. The materials used should be durable, resistant, ductile and malleable, to reduce manufacturing, repair and maintenance costs, and manufacturers should be familiar with these properties, as different standards require different properties (Smith & Hashemi, 2001).
The demand for metals and alloys for manufacturing products in the aviation industry and elsewhere will rise because of the benefits associated with them. Manufacturers in different industries have realized these benefits and continue to invest in production, preferring to combine different metals to get the desired effect and to offer quality, durable and safe products. In addition, the need to use environmentally friendly materials when manufacturing aircraft and other products will lead to greater use of alloys, because alloys are safe and can be recycled, conserving the environment. Alloy usage in the aviation industry will also improve with technological development, which has improved alloy production and will continue to do so: manufacturers use different technologies to extract metals from ores, to combine metals into alloys, and to mine and refine them. Hence, alloys and metals will become even more common in the near future and will improve the quality and durability of products (Smith & Hashemi, 2001).
Smith, W. F., & Hashemi, J. (2001). Foundations of Materials Science and Engineering (4th ed.). McGraw-Hill, p. 394.
The article analyzes the introduction of Pivotal, the new competitor to Amazon Web Services. Pivotal came into being through the financial support of EMC and VMware, two multibillion-dollar technology companies, which have also supplied Pivotal with skilled professionals who moved to the new web service provider in an effort to give customers quality services. The main focus of Pivotal will be to provide customers with a platform to build new applications and leverage data using cloud technology (Hardy, 2013). However, Amazon has an advantage over newcomers, as it runs the largest public cloud and allows customers to rent the technology rather than purchase it. Amazon has also established a selection of software applications that are already in the market. Pivotal has the task of proving that it can provide better, faster and cheaper services than Amazon if it is to be competitive.
Background Information on Author
The author of the above article is Quentin Hardy, a deputy technology editor writing for The New York Times. Quentin writes mainly about the diverse technologies in the market today. He also worked as a national editor at Forbes, where he wrote about technologies and their interaction with people and the business environment. Quentin is also a visiting lecturer at UC Berkeley, where he focuses on communication technologies. He has vast experience in technology, which has been his specialty.
Quentin's article is quite informative, as it reveals to readers the presence of a new web service provider, Pivotal. The article provides readers with background on the new company and looks at the challenges it faces in attracting users. The author also analyzes Amazon, an established web services company that is bound to be affected by Pivotal's entry into the market. Overall, the article takes an objective perspective and leaves readers to make their own choice.
Hardy, Q. (2013). EMC's Amazon challenger comes out. Retrieved from http://bits.blogs.nytimes.com/2013/04/01/emcs-amazon-challenger-comes-out/
The Challenging Problems in Multicore Processors
The continuous drive to attain higher performance without pushing up thermal effects and power consumption has led researchers to look for alternative microprocessor architectures. Multicore processors now dominate the computer market, and no alternative way of increasing microprocessor performance has been suggested for the near future. Multi-core chips do not run at a much higher speed than single-core models; rather, they improve overall performance by working efficiently and handling a workload of activities in parallel. This paper is on the challenging problems of multicore processors. Three of these challenges will be analyzed: finding reliable and reproducible testing and debugging, implementing multithreading and multiprocessing, and communicating a design with multiprocessing components.
Following the trend of boosting processor clock speed to improve performance is a thing of the past. The new direction that manufacturers have adopted, and are focusing on, is multi-core processors. Using a single multi-core processor chip has an advantage in raw processing power, but this advantage is not free: designers have to produce processors that reduce power consumption and system cost while at the same time adding functionality and performance to their products. This is a challenging trade-off to address (Rijpkema, 2003). One approach suggested previously was ramping up a processor's clock speed, but this often leads to more power consumption. Memory performance has also failed to keep pace with processor technology. These mismatches of the earlier approaches limit any vital gains in the performance of the computer system.
Another option is the multiprocessor system, but it suffers from high cost and a large die area; its performance increase comes at a fairly substantial cost in system power consumption and silicon. Yet another option is multi-issue processors, which have two or more execution units but struggle to utilize hardware resources and also suffer a large area penalty. This means that software needs to be constantly revised to make successful use of the multiple execution units (Wassal & Hasan, 2001).
Implementing multiprocessing and multi-threading
Multi-threading means an application runs a large number of threads within one process, while multiprocessing means an application is organized as multiple OS-level processes. A thread is a chain of instructions within a specific process; every thread has its own instruction pointer, together with a set of registers and stack memory. The virtual address space is process-specific and is shared by all threads in a given process. Multi-threading is a light-weight form of concurrency, with less context per thread than per process; because of this, context switching and synchronization costs are low. The shared address space means that data sharing requires no extra work. Multiprocessing has the opposite benefit: because the OS insulates processes from each other, an error in a single process cannot tamper with other processes. Multi-threading, on the contrary, works differently, in that an error in one thread can also affect the other threads in its process. Individual processes may have different permissions and run as different users (Kleiman et al., 1995).
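The shared-address-space point above can be shown in a few lines. This is a minimal, illustrative Python sketch, not code from the cited authors:

```python
# Threads share their process's virtual address space: every thread below
# mutates the SAME module-level variable, with no copying or messaging.
import threading

counter = 0  # one storage location in the process's single address space

def bump():
    global counter
    counter += 1  # all threads see and update the same counter

threads = [threading.Thread(target=bump) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 5: all five threads updated the shared location

# By contrast, a multiprocessing.Process running bump() would increment only
# its own copy of counter: the OS insulates processes from each other, so
# they must use explicit shared memory or IPC to exchange data.
```

This is exactly the trade-off the paragraph describes: threads get free data sharing (and shared failure modes), while processes get isolation at the cost of explicit communication.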
The support of multiple software threads within a single processor core offers the advantages of the traditional approaches without the disadvantages associated with them. While multi-threading is becoming common in the desktop and server markets, the embedded market has not fully exploited it. One of the main challenges with traditional single-threaded processors is that their execution pipelines stall for many reasons, such as branch mispredicts, cache misses and various other interlocking events. The key to achieving maximum performance from such cores lies in the way threads are executed in the pipeline (Schmidt & Huston, 2001).
This problem can be resolved, without establishing a whole new architecture, by using the MIPS32 34K core, which is based on the new virtual-processing approach and supported by application-specific extensions to the MIPS instruction set architecture. This approach gives efficient multi-threaded utilization of the nine main stages of the 34K execution pipeline, supported by hardware for handling quality of service, thread contexts and virtual processor prioritization (Worm, Ienne, Thiran, & De Micheli, 2002).
UNIX is a commonly used example of a multi-processing system; another is OS/2 on high-end personal computers. Multi-processing systems are usually more complex than single-process systems. Multithreading is recommended when rendering complex scenes on a medium local machine. The number of physical processors available determines the number of useful threads: hyper-threaded machines report two logical processors even when only one CPU is installed on the motherboard. However, using two processes does not guarantee an improvement, let alone a doubling, of performance. There is a basic difference between multithreading and multiprocessing here: multiprocessing is implemented in the renderdl executable, while multithreading is implemented in the 3Delight library. This makes no difference to users who supply RIBs with the renderdl command; users of the 3Delight library, however, can only obtain parallelism by linking it with multithreading (Wu, 2002).
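A program can query the logical processor count the paragraph refers to directly from the OS. The sketch below is illustrative; the policy of reserving one core is an assumption, not a rule from the cited sources:

```python
# Query the number of logical processors the OS reports. On a hyper-threaded
# machine this exceeds the number of physical CPU packages installed.
import os

logical_cpus = os.cpu_count() or 1  # cpu_count() may return None; assume 1 then
print(f"logical processors reported by the OS: {logical_cpus}")

# A renderer might size its worker pool from this figure; leaving one core
# free for the OS/UI is a common (assumed, illustrative) heuristic.
worker_threads = max(1, logical_cpus - 1)
print(f"worker threads to spawn: {worker_threads}")
```

Note that this count is what the OS exposes, which is why a hyper-threaded single-CPU machine "indicates two processors" without actually doubling throughput.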
Multiprocessing and multi-threading are closely related. Multiprocessors share only memory or interconnect, while multi-threaded processors share these as well as instruction fetch and issue logic and other processor resources. In a single multi-threaded processor, the different threads compete for processor resources and issue slots, and this limits parallelism. In multithreaded programming the JVM controls switching, unlike multiprocessing, where the OS controls switching; the threads of a multithreaded program use the same language, such as Java, while multiprocessing can use different processes written in different languages. Every process in a multiprocessing design has its own JVM, while in multi-threading all threads in a single process share one JVM (Schmidt & Huston, 2001).
Finding reliable and reproducible debugging and testing
Testing and debugging often go together: testing deals with finding errors, while debugging is about locating and repairing them. The debugging and testing cycle runs from test to debug and back again. Debugging is supported by the application of reliable tests, especially regression tests, which help to reduce the introduction of new bugs during debugging. Debugging and testing should be conducted by the same people each time.
Debugging can be difficult when there is no direct relationship between the internal cause of an error and its external manifestations. Other challenges arise when the cause and the symptom lie in remote locations of a program. A third reason is that changes to a program, such as bug fixes and new features, may modify or mask bugs. A fourth is that symptoms can result from misunderstandings and human mistakes that are difficult to trace. Bugs can also stem from external causes and from program timing or input sequences that are difficult or rare to reproduce. Last, bugs can be caused by other systems or software installed on the machine.
Designing for Test/Debug
It is important to give thought to how to debug or test a system while writing the code, before errors turn into bugs. It is also essential to design for debugging from the start, when planning debugging and testing. Testing has to be done early and often. It is also important to assert conditions that should be true at a given point, because assertions create functions that help in checking data (Ye, Benini, & De Micheli, 2002).
Certain properties are essential for test data. First, in any good program, the expected output should be known for every test input. Test inputs have to include the inputs that frequently cause errors, such as boundary values, and must fully exercise the code: one has to ensure that every line of code is executed at least once, and for code that is often skipped, at least one test input must reach it. It is also essential to test for bugs that rarely occur (Youssef, El-Derini, & Aly, 1999).
In debugging and testing, it is essential to use assert statements, which can be turned on and off, to express the conditions the programmer intends to be valid at a specific point. Assert statements are mainly useful for asserting post-conditions rather than preconditions (Theis, 2000).
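The assertion and boundary-value advice above can be sketched in a few lines. This is an illustrative example with an invented function, not code from the cited sources; in Python, assertions are "turned off" globally with the `-O` interpreter flag:

```python
# Assertions state conditions the programmer intends to hold at a point.
# clamp() is a hypothetical example function used purely for illustration.

def clamp(value, low, high):
    """Clamp value into the closed interval [low, high]."""
    assert low <= high, "precondition: interval must be well-formed"
    result = min(max(value, low), high)
    assert low <= result <= high, "postcondition: result lies in the interval"
    return result

# Boundary values are exactly the inputs that frequently cause errors, so the
# test inputs sit below, on, and above both ends of the interval [0, 10]:
for probe in (-1, 0, 1, 9, 10, 11):
    print(probe, "->", clamp(probe, 0, 10))
```

For every test input the expected output is known in advance (the first property of good test data), and the probe values exercise both boundaries of the code's behavior.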
Fixing and finding bugs
To create high-quality software, it is essential to find bugs, and the easiest bugs to find are those that are reproducible; the hardest are those that are difficult to reproduce. There are different types of bugs. Compile-time bugs are often caught by the compiler and include spelling, syntax and static type mismatches. Design bugs produce incorrect output because of a flawed algorithm. There are also off-normal conditions, where a section of the software fails to handle unusual input, and interface errors between programs, threads and modules, which lead to runtime exceptions. The ideal debugging process involves the following steps. First, identify a test case that clearly shows the point where the fault occurs. Then isolate the problem into small fragments, and correlate the faulty behavior with the program code or logic error. Change the program, and also check other sections of the program where similar logic could hide the same fault. Then verify, using a regression test, that the error is completely removed, being careful not to insert new errors. When appropriate, update the documentation (Walrand, 2000).
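The fix-then-regression-test cycle described above can be sketched as follows. The bug, the function and the tests are all hypothetical, invented for illustration:

```python
# Hypothetical bug: average([]) used to crash with ZeroDivisionError.
# The fix is applied, and the input that reproduced the fault stays in the
# regression suite so the bug cannot silently return in a later change.

def average(values):
    # Fix: define the average of an empty sequence as 0.0
    # instead of dividing by zero.
    if not values:
        return 0.0
    return sum(values) / len(values)

def regression_tests():
    assert average([]) == 0.0       # the once-failing test case
    assert average([2, 4]) == 3.0   # unchanged behaviour is re-checked too
    assert average([5]) == 5.0      # being careful not to insert new errors

regression_tests()
print("all regression tests passed")
```

Keeping the once-failing input in the suite is what makes the test a regression test: it verifies the error is completely removed every time the program changes.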
Communicating a design that has multiprocessing components
The interrelationships among tightly coupled multiprocessing, loosely coupled multiprocessing, the multiple database function and inter-processor communications are complex, and comprehending them is essential for anyone installing a z/TPF system. Abstract does not mean difficult, obscure or unrealistic; the abstraction is simply meant to hide most details. When threads synchronize their access to shared storage locations by using synchronized routines, the effect of running a program on a shared-memory multiprocessor becomes the same as running it on a uniprocessor. In many situations, however, the programmer may be tempted to exploit the advantages of a multiprocessor by using techniques that escape the synchronized routines; these are highly dangerous. The major components of a multiprocessor are (Singh, Weber, & Gupta, 1992):
• The store buffers that link processors to their caches
• The processors
• Caches that hold the contents of recently modified or accessed storage locations
• Memory, shared by all processors, which is the primary storage
In the traditional model, the multiprocessor behaves as though the processors were directly connected to memory: when one processor stores to a location that is immediately loaded from, a second processor is able to load the data stored by the first. Caches can be used to speed access to memory, and the desired semantics can be preserved when the caches are kept consistent. This simplistic approach is problematic, however, because it is necessary to delay the processor to ensure that the desired semantics are attained. Modern multiprocessors use different techniques to prevent such delays, but the negative aspect of these techniques is that they change the semantics of the memory model (Sgroi & Sangiovanni-Vincentelli, 2001).
In controlling I/O devices, it is essential that I/O and memory operations be carried out in program order. Processors have to be built so that they do not buffer I/O writes, thus enforcing the strict ordering of I/O operations (Shiue & Chakrabarti, 1999). To optimize memory performance, however, chipsets and processors implement write-back caches and write buffers. Compatible processors have to preserve ordering for internal cache and write-buffer accesses, and the chipset should also guarantee processor ordering for all accesses to external memory. When using a heterogeneous system, it is necessary to implement a communication scheme, or two operating systems with a similar infrastructure, for inter-processor communications. This will help to minimize resource conflicts, and the two operating systems will provide a standardized mechanism for accessing shared hardware components.
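The synchronized-routine idea described earlier, under which a shared-memory multiprocessor behaves like a uniprocessor, can be sketched in software. This illustrative Python example uses threads as a stand-in for processors sharing a storage location; it is not from the cited sources:

```python
# When EVERY access to a shared location goes through a lock (the
# "synchronized routine"), concurrent updates behave as if they ran one at a
# time, so no read-modify-write is lost.
import threading

total = 0                    # the shared storage location
lock = threading.Lock()      # the synchronized routine guarding it

def add_many(n):
    global total
    for _ in range(n):
        with lock:           # acquire before touching shared storage
            total += 1       # the read-modify-write is now atomic

workers = [threading.Thread(target=add_many, args=(10000,)) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(total)  # 40000: no updates are lost
```

Bypassing the lock (the "escape" the text warns about) can be faster, but the unsynchronized `total += 1` is a non-atomic read-modify-write, and concurrent updates can then overwrite each other, which is exactly why such techniques are called highly dangerous.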
Kleiman, S., Devang, & Smaalders, D. (1995). Programming with Threads. Prentice-Hall.
Worm, F., Ienne, P., Thiran, P., & De Micheli, G. (2002). An adaptive low-power transmission scheme for on-chip networks, pp. 92-100.
Wu, J. (2002). A deterministic fault-tolerant and deadlock-free routing protocol. Proceedings of the 16th International, pp. 67-76.
Ye, T., Benini, L., & De Micheli, G. (2002). Analysis of power consumption in network routers, pp. 524-529.
Youssef, M., El-Derini, N., & Aly, H. (1999). Structure and performance evaluation of a replicated banyan network based ATM switch, pp. 258-266.
Theis, T. (2000). The future of interconnection technology. IBM Journal of Research and Development, 44, 379-390.
Walrand, P. (2000). High-Performance Communication Networks.
Wassal, A. G., & Hasan, M. A. (2001). Low-power system-level design of VLSI packet switching fabrics. IEEE Transactions on CAD of Integrated Circuits, pp. 723-738.
Rijpkema, E. (2003). Trade-offs in the design of a router with both guaranteed and best-effort services for networks on chip, pp. 350-355.
Shiue, W., & Chakrabarti, C. (1999). Memory exploration for low power, embedded systems, pp. 140-145.
Singh, J. P., Weber, W., & Gupta, A. (1992). Stanford parallel applications for shared memory. Computer Architecture News, 20(1), 5-44.
Sgroi, M., & Sangiovanni-Vincentelli, A. (2001). Addressing system-on-a-chip interconnect woes through communication-based design. Design Automation Conference, pp. 667-672.
Searching the Web
Searching for information on the Web is multifaceted: different search routes are likely to lead a searcher to the same information. Web searching is influenced by many factors and requires skills in accessing, sorting and using information from online sources. Students and researchers can use the Web to access millions of Web pages containing information on diverse topics, and searching can be done through search engines or directories. There are fundamentally two methods of finding information on the Web: browsing and searching. Browsing involves following a hypertext trail of links created by other users. Searching relies on powerful software that matches the keywords specified by a searcher against the most relevant documents on the Web.
Search Strategy
There are a variety of strategies that can help writers narrow down the amount of information found on the Web. One of the most useful is phrase searching, which involves typing several keywords together as a phrase in a keyword search. Boolean operators, commonly known as AND, OR and NOT, are efficient in clarifying what a researcher wants. It is also crucial for students to use multiple search sites, since different sites may return different information. Sometimes basic searches do not yield the required information; in such cases, field searching is appropriate. Field searching limits the search to a particular characteristic such as page title, URL, page text or website (Morley & Parker, 2010, pp. 339-341).
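The narrowing effect of these strategies can be sketched in a few lines of Python. The documents and query terms below are hypothetical examples, not a real search engine's API; the point is only to show how a phrase requirement and AND/OR/NOT conditions each shrink the result set.

```python
# A minimal sketch of phrase searching plus Boolean operators as filters.
# The sample documents and query terms are invented for illustration.

def matches(text, phrase=None, all_of=(), any_of=(), none_of=()):
    """Return True if `text` satisfies the phrase and Boolean criteria."""
    t = text.lower()
    if phrase and phrase.lower() not in t:          # phrase searching
        return False
    if not all(w.lower() in t for w in all_of):     # AND: every word required
        return False
    if any_of and not any(w.lower() in t for w in any_of):  # OR: at least one
        return False
    if any(w.lower() in t for w in none_of):        # NOT: exclude these words
        return False
    return True

docs = [
    "Search engines index billions of web pages",
    "Subject directories organize links by topic",
    "Meta search engines query several engines at once",
]

# AND "search" but NOT "meta" keeps only the first document.
hits = [d for d in docs if matches(d, all_of=["search"], none_of=["meta"])]
```

The same `matches` helper could be reused for field searching by passing it only the page title or URL instead of the full text.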
Web search tools fall into four broad categories: search engines, directories, meta search engines, and other Web resources such as Web bibliographies.
A search engine is a search service which automatically indexes Web pages and, to some extent, other file types. They are computer programs which read Web pages on the internet and store this information in a database. When you make a search, the search engine searches its database for relevant material which it then presents to you. Google is the most used search engine in the world.
The purpose of using Web standards is to ensure that Web information is accessible to all. The standards are developed and maintained by the World Wide Web Consortium. They ensure that pages posted on websites are easier for search engines to index and can easily be converted to other formats.
Using a Meta-search engine
A meta search engine is a tool that uses multiple search engines, allowing one to search several of them simultaneously. Most meta search engines forward queries to a number of major search engines and directories, including Google, Lycos, MSN Search, Teoma and others. After the query is sent, each search engine compares the search phrase against its database of Web page data and returns results to the results page of the meta search engine for the searcher to view. Some meta search engines identify the search engines they retrieve links from, while others do not. Meta search services often search both search engines and directories.
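The fan-out-and-merge behavior described above can be sketched as follows. The two engine functions are stand-ins for real search engine APIs (they are not real services); the sketch only shows the core idea: send one query to every engine, then merge the result lists, removing duplicate links.

```python
# A minimal sketch of a meta search engine: one query fans out to several
# underlying engines, and their results are merged and de-duplicated.
# engine_a and engine_b are hypothetical stand-ins, not real search APIs.

def engine_a(query):
    return ["http://example.com/a1", "http://example.com/shared"]

def engine_b(query):
    return ["http://example.com/shared", "http://example.com/b1"]

def meta_search(query, engines):
    """Query every engine, merging results in first-seen order."""
    merged, seen = [], set()
    for engine in engines:
        for url in engine(query):
            if url not in seen:      # drop links already returned by another engine
                seen.add(url)
                merged.append(url)
    return merged

results = meta_search("invisible web", [engine_a, engine_b])
```

A real meta search service would also rank the merged list and, as the text notes, might label which engine each link came from.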
A few search engines use natural language querying. A natural language query allows users to enter a question exactly as they would ask it of a person. The search engine uses its knowledge of the grammatical structure of the question to convert the natural language into a search query, a process known as parsing.
When is it suitable to use a meta search service?
A meta search service is useful at the beginning of a search in an unfamiliar subject, to get an idea of relevant search words and important resources. It is effective when you quickly want an overview of the central Web pages on a subject, and in searches with clearly delimited and exact queries. Meta search is crucial when one wants fast results on a simple or popular search using one or two search words, or when one wants to quickly compare the query results of different search services.
Meta search services are advantageous in several ways. They are useful when a searcher wants a limited number of relevant hits, appropriate for obscure subjects, suitable for testing when you do not find what you are looking for, and good for getting a survey of what exists on the Web in your subject. Some meta search services include Clusty, Dogpile, Jux2, Ixquick, Mamma, MetaCrawler and SurfWax (Fransson, 2009, p. 17).
Subject directories are excellent tools for search-oriented searches or when a searcher wants to find sites recommended by experts. A subject directory is a collection of links to a large number of internet resources, typically organized by topic area. The two types of subject directories are commercial directories and academic and professional directories. Commercial directories are general in nature and less selective in the links they provide. Academic and professional directories are usually maintained by experts and provide credible information; they are excellent research tools for highly specialized information. Some of the most popular academic and professional directories include INFOMINE, The Internet Public Library, and Librarians' Index to the Internet.
The invisible web consists of material that general-purpose search engines either cannot or will not include in their collections of Web pages. It contains a vast amount of authoritative and current information accessible to the user. Invisible web tools are efficient in retrieving information stored in formats unavailable to many directories and search engines: they can retrieve PDFs, Flash and Shockwave content, executable programs and compressed files. The invisible web also contains real-time and dynamically generated content, as well as information in pages consisting of images, audio or video (Sherman & Price, 2007, pp. 50-52). The invisible web is essential when one is interested in a very specialized topic. Specialized search gives a high level of control over search engine input and output, and a level of precision and recall not available with general search engines. The invisible web has a high level of authority, and material found there is dependable.
Each day hundreds of internet users communicate in forums such as blogs, where they share ideas and concepts. There are a number of educational and scientific internet discussion groups, and these groups provide information with a valid point of view. Using information from these groups requires restraint, filtering and censoring: it is common to find biased information in them, so it is important to assess the credibility of what is posted. Always be on the lookout for the presence of moderators, the type and qualifications of members, and the ethical standards observed (Mukund, 2010, p. 1).
Evaluating the Information
Information obtained from the internet should be evaluated to determine whether it is appropriate, credible and current. The website containing the information should be analyzed thoroughly. It is important for the researcher to find out who funds a website and their motive. It is crucial to note the ending of the website's URL: the ending indicates whether the site is educational (.edu), commercial (.com), governmental (.gov) or organizational (.org). It is fundamental to scrutinize the author of the content to establish his or her credibility. The information should be up to date to be relevant. It is essential to evaluate the quality of the information by comparing it with other sources and checking whether facts are supported with relevant references. Select information that is objective and directed to the point of view that is of interest (Amy, Gwenn & Lori, pp. 208-211).
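The URL-ending check described above is mechanical enough to automate. The sketch below uses Python's standard `urllib.parse` module to pull the host out of a URL and classify it by top-level domain; the example URL is illustrative, and of course the TLD is only one rough signal among the evaluation criteria listed above.

```python
# A small sketch of classifying a website by the ending of its URL,
# one of several evaluation signals discussed in the text.
from urllib.parse import urlparse

TLD_TYPES = {
    "edu": "educational",
    "com": "commercial",
    "gov": "governmental",
    "org": "organizational",
}

def site_type(url):
    """Map a URL's top-level domain to the site category it suggests."""
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1]        # last dotted component, e.g. "gov"
    return TLD_TYPES.get(tld, "unknown")

kind = site_type("http://www.cdc.gov/obesity")  # "governmental"
```

This check says nothing about authorship, currency or accuracy, so it complements rather than replaces the other criteria.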
Principles of Ethical and Legal Use
The use of the internet and other technologies for research purposes is associated with several ethical and legal issues. Most of these issues involve the ethical principles of autonomy, data privacy, confidentiality, integrity of data, intellectual property, professional standards and nonmaleficence (Brown & Robert).
Intellectual property remains one of the key issues, not only in web searching but also in the contemporary business world. It is important to acknowledge the author of the source used and to avoid using direct words without citing. The use of intellectual property is governed by national and international laws.
Plagiarism is intellectual theft, and students should strive to avoid it. Most students find it challenging to quote information from online sources. The best way to avoid plagiarism is to give credit to the author of the source of information, through in-text citation and referencing. Different international formats such as MLA, APA, Chicago and Harvard have different guidelines on citing online sources. Students should also have good paraphrasing skills.
Keeping Track of and Citing Resources
It is indispensable for students to keep track of online sources and to cite them to avoid plagiarism. Keeping track of resources helps students avoid omitting sources used in the document, so as one finds resources on the internet, it is important to record them. Currently there are software tools that help students keep track of these sources; they include EndNote, which provides a mechanism for accurately generating in-text references and a bibliography in popular writing styles (McVay, p. 73).
It is important to cite the sources of your information. Citations allow the reader of your work to evaluate the sources you consulted, and they indicate the depth and scope of the information you provide. Citation is a standard practice in academic work, and proper citation methods help one to avoid plagiarism.
The MLA format is commonly used in the liberal arts and humanities. When citing an entire website, give the name of the website, the title of the source and the date of access.
The National Center for Disease Control. www.ncdc.gov. Overweight Management. 4/4/2013.
When citing individual resources, start with the name of the author, the title of the source, the date last edited and the date of access.
Garders. The National Education. 2013. 4/4/2013.
APA citation provides that one cite the source by indicating the name of the contributor (last date the data was updated), the title of the source, and then "Retrieved from" followed by the web address of the source.
J. Baggert (2013). Evils of internet. Retrieved from http://www.centraling.com
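The APA pattern just described is regular enough to assemble programmatically, which is essentially what reference managers do. The sketch below builds the contributor/year/title/URL string; the example fields echo the hypothetical citation above.

```python
# A sketch of assembling the APA web-citation pattern described in the
# text: contributor (year). Title. Retrieved from URL.

def apa_web_citation(contributor, year, title, url):
    """Format one APA-style citation for a web source."""
    return f"{contributor} ({year}). {title}. Retrieved from {url}"

citation = apa_web_citation("J. Baggert", 2013, "Evils of internet",
                            "http://www.centraling.com")
```

Swapping in a different formatting function for MLA or Chicago is how one source log can serve several writing styles.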
The Web is one of the most widely used sources of information. Effective web searching requires skill in sifting the millions of available pages down to relevant sources. There are four basic categories of search tools: search engines, directories, meta search engines, and other Web resources. Subject directories are more specific than search engines, while meta search engines incorporate information from several databases. The invisible web provides information not easily obtained through search engines. Discussion groups provide forums through which experts discuss issues and share information; the credibility of information obtained from such groups should be thoroughly evaluated to ensure it meets the intended quality. Information obtained from the Web should be evaluated before being used for scholarly purposes, based on the content of the resource, the author of the information, the sponsorship of the website and other factors. It is essential to keep track of the sources used to assist in referencing. A lot of effort should be dedicated to ensuring that information is not plagiarized; this is achieved by citing correctly and acknowledging the author of the source. Different international writing styles have different guidelines for doing so.
Fransson, J. (2009). Efficient Information Searching on the Web. Retrieved from www.jonasfransson.com
McVay, M. (2004). Learning Online: A Guide to Success in the Virtual Classroom. Routledge, p. 71.
Morley, D., & Parker, C. (2010). Understanding Computers (12th ed.). Cengage Learning, pp. 339-341.
Sherman, & Price. (2007). The Invisible Web. Thomas H. Hogan, pp. 50-51.
Mukund, J. (2010). Internet Based Discussion Group. Retrieved from http://www.ncbi.nlm.nih.gov on 4/4/2013.
Evidence shows that even clean sources of power have drawbacks. In the early days of nuclear power, the world thought differently about this alternative source of energy: humans associated nuclear power with war. Since then, scientists and theorists have developed findings and research showing that the use of nuclear power is a good idea because it is a substitute for current energy sources. Most people believe that history always repeats itself, and there are instances when we end up doing things we have done before. It is clear that every energy source humans use has both benefits and drawbacks. In light of this, engineers and scientists have developed strategies to ensure and stabilize the level of power produced from nuclear energy (Penney & Selden, 2011).
Enrico Fermi, a Nobel Prize winner born on September 29, 1901 in Rome, Italy, is a man who has been closely associated with the safety history of the nuclear reactor. Fermi has been recognized for the good work he contributed to the field of nuclear power generation. Many controversies have developed over the use of nuclear energy: this source of electricity has many disadvantages and advantages compared to other forms of energy. Evidence shows that this energy alternative can be used with discretion and in moderation (Penney & Selden, 2011).
On April 25th, 1986, a reactor at the Chernobyl Nuclear Power Plant near the town of Pripyat, Ukraine, was scheduled for a maintenance shutdown, to be carried out as a controlled shutdown for the purpose of testing the generator's ability to produce electricity for the plant's safety systems. At the time, the power plant was using dual diesel generators, which were estimated to power up within 40 seconds whenever electrical input was needed. Scientists realized that by connecting the reactor to the diesel generators, the reactor would produce energy that could be used to start them up, cutting down the 40-second power-up timeframe, and would then allow the generators to spin with the help of their momentum. The safety experiment, which combined flawed electrical and chemical conditions with an inexperienced crew of scientists, allowed the nuclear disaster to erupt (Penney & Selden, 2011).
Although the scientists and engineers had followed the safety guidelines for the experiment, they were unaware of the hazardous conditions that were brewing in the reactor. Initially the reactor worked in a normal state, but its energy then started increasing, and it drastically gained electrical potential far beyond what was believed to be the allowable maximum. The immensely high temperatures inside the reactor caused steam to blow off the reactor's top, allowing oxygen to enter and react with the graphite moderator. The reaction created an intense graphite fire and an explosion that spread radiation and radioactive particles (Penney & Selden, 2011).
The Fukushima Daiichi nuclear disaster followed the Great East Japan Earthquake, which was about 9.0 in magnitude. The earthquake occurred on March 11th, 2011 at around 2:46 pm, lasted about three minutes, claimed many lives, caused severe damage and disrupted many human activities. After gathering and analyzing the information, a clear conclusion emerges from the two incidents: they are nuclear power events with something in common worth learning from. The Chernobyl disaster made many workers sick, with others dying, amid fear of developing cancer in the future (Penney & Selden, 2011).
Evidence shows that nuclear reactors create waste disposal challenges and difficulties. After disposal, the waste may still produce a large amount of radiation; to deal with this, operators create a cooling pool or a special storage place near the reactor. It is clear that there are disadvantages associated with the use of nuclear energy compared to other sources. Nevertheless, nuclear power also has advantages. One is that nuclear power is inexpensive compared to power from oil. Evidence shows that the world has limited oil resources, which means that in the near future they will be exhausted; nuclear energy acts as an alternative that could supply electricity when other resources are unavailable (Penney & Selden, 2011).
Penney, M., & Selden, M. (2011). The Severity of the Fukushima Daiichi Nuclear Disaster: Comparing Chernobyl and Fukushima. Retrieved from http://www.globalresearch.ca/the-severity-of-the-fukushima-daiichi-nuclear-disaster-comparing-chernobyl-and-fukushima/24949 on April 2, 2013.
Reciprocity is defined as a mutual exchange of privilege: responding to a good deed with another good deed, a rewarding kind of action. In response to a friendly action, people are usually nicer and more cooperative than the self-interest model predicts, while in response to a hostile action they are nastier and more brutal (Wadhwa, 2007). Reciprocity is normally considered a strong determining factor in human behavior. People usually categorize an action by viewing its consequences and the underlying intentions of the person. Reciprocity is centered much more on trading favors than on making a contract or negotiating with another person. It rests on the kind of trust that must exist toward a person from whom one is about to request a certain service or favor.
If the needed degree of trust does not exist, the favor can be requested through a third person who is trusted by both parties. This results in the creation of reciprocity networks based on kinship, because trust usually exists between close kin, though it can extend to include more people among acquaintances and friends. A network is said to possess reciprocity when its response and excitation terminals can be interchanged (Wadhwa, 2007); a network that satisfies this condition is referred to as a reciprocal network. Reciprocity is usually considered a source of stability and balance in social structure. However, it does not imply cohesion in the sense of connectivity.
How People in the US Form Reciprocity Networks
People in the United States follow reciprocity networks in different ways. Reciprocity networks as practiced in the United States are seen in the tendency of individuals to form mutual connections with other people by returning similar acts, such as making phone calls and sending emails. These are common means of communication, along with online chatting and messaging services such as Twitter and Facebook (Glanz et al., 2008). This is vastly enhanced by technology, where people on Twitter and Facebook use this form of networking to communicate with others. In a reciprocal relationship, the two parties share an equal interest in maintaining the relationship, whereas in a relationship that lacks reciprocity, one individual appears more active than the other (David, 2011).
For a reciprocal network to exist, there needs to be equal action from the parties; this helps in creating a good network of communication. Reciprocal networks in the United States are likely to persist in the future, and behaviors related to reciprocity provide a good feature for classification and ranking based on methods for trust prediction (Bakshi & Bakshi, 2009). Reciprocity networks play significant roles in communication and social networks. In the United States they help support propagation processes: for example, when spreading information and ideas through email, the presence of mutual links helps speed up the propagation. Conversely, the lack of reciprocal links can reveal unwanted emails and calls in spam detection.
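The mutual-link idea behind these classification and spam-detection uses can be made concrete with a small measure: the share of directed links that are returned. The "who emails whom" pairs below are hypothetical examples.

```python
# A minimal sketch of measuring reciprocity in a directed network: the
# fraction of links A->B for which the return link B->A also exists.
# The email pairs are invented for illustration.

def reciprocity(edges):
    """Fraction of directed edges that are reciprocated."""
    edge_set = set(edges)
    mutual = sum(1 for (a, b) in edge_set if (b, a) in edge_set)
    return mutual / len(edge_set) if edge_set else 0.0

emails = [("alice", "bob"), ("bob", "alice"), ("alice", "carol")]
score = reciprocity(emails)  # 2 of the 3 links are reciprocated
```

A sender whose outgoing links are almost never returned (a score near zero) is exactly the pattern spam detectors look for, while a high score suggests a genuinely mutual relationship.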
People in the United States have formed several reciprocal networks through innovative means such as Facebook, a social network used by young people, researchers, celebrities and many others. People can make friends on Facebook, and those friends will respond by friending them back; similarly, a person might follow another on Twitter, and the other person will follow them back. These are common examples of reciprocity networks used by people in the United States. For a person to maximize the potential and power of reciprocity in social media, he or she needs to be a catalyzer of reciprocity (Wasserman, 1994). In using Facebook and Twitter, people act reciprocally and also create opportunities for reciprocity in the social networking setting.
An example of a reciprocity network from my own experience is the set of friends I have made through social networking, using my email, Twitter and Facebook accounts. To maintain it, I budget my time so that I can comment on some blogs that I follow. This is important because the bloggers I follow know the significance of comments on what they have written, and commenting on people's blogs creates opportunities for them to engage with me in return. This helps create a reciprocal network in which we respond to each other. Another way in which I have formed a reciprocity network is by sharing valuable contacts with my social network. By doing this, I am able to keep a rational record of the people who add the most value to my network, and I share the list with the people who share my interests and passions. To form a reciprocity network, I have to ensure that I work smarter rather than harder.
Wadhwa, C. (2007). Network Analysis & Synthesis. Technical Publications.
Bakshi, V., & Bakshi, U. (2009). Network Analysis. New Age International.
Viswanath, K., Rimer, B., & Glanz, K. (2008). Health Behavior and Health Education. John Wiley & Sons.
David, A. (2011). Community Practice. Oxford University Press.
Wasserman, S. (1994). Social Network Analysis. Cambridge University Press.
Can Humans Be Replaced by Machines?
The replacement of human beings by machines is a debatable issue in society, as people have different views regarding it. Some believe that human beings can be replaced by machines because of their deficits; others believe human beings cannot be replaced by machines. Arthur Clarke has examined the obsolescence of human beings in his article "The Obsolescence of Man," where he provides various reasons why human beings are obsolete and should be replaced with machines (Clarke, n.d.).
First, human beings should be replaced with machines because they lack essential senses needed to carry out some jobs. There are senses that living structures cannot offer and that are needed immediately: no creature has developed organs to detect radioactivity or radio waves. Some professions can only be carried out with magnetic fields, electric beams and vacuum tubes, and there are other jobs that organic structures cannot do at all. Clarke claims that human beings waste energy on tearing down and rebuilding structures; for instance, energy is spent on rebuilding instead of thinking. This has hindered human beings from performing some tasks (Clarke, n.d.).
Second, the challenge of space will lead to the replacement of human beings with machines. Only a small portion of space can be accessed directly by human beings: they can live without extensive protection and mechanical aid in only a small region, and a large section of space is beyond their reach. Clarke claims that the region where human beings can live without a space suit and pressure cabin is like a single room. Human beings will colonize other worlds, but they will spend a lot of energy on protecting their sensitive and weak bodies from gravity, pressure and temperature, among other things. Replacing man with machines will overcome these challenges, because machines can access space without needing protection from gravity, temperature and pressure the way human beings do. Machines are able to wait for many years before reaching their destinations; they can wait patiently for centuries to travel to far regions of the universe. Man can only explore space and control a portion of it; only machines can conquer it (Clarke, n.d.).
In conclusion, human beings have weaknesses that impair them from carrying out certain tasks. Man does not have the senses and capabilities required to perform certain jobs, and this has hindered him from accomplishing them. Man is unable to reach all of space, but only a section of it, because of unfavorable conditions, and he lacks the senses needed to perform some vital tasks, which hinders him from achieving the goals set. Therefore, man should be replaced with machines, as they have proved effective in performing certain tasks and are not prone to unfavorable conditions. Some of the machines man has used to reach space have been effective and have enabled man to study the universe. Developing new instruments for tasks beyond man's ability will enable their successful completion (Clarke, n.d.).
Clarke, A. C. The Obsolescence of Man. Science and the Future.
Hawthorne, Rappaccini's Daughter; Aldiss, Super-Toys Last All Summer Long
In this story, Aldiss has embraced the idea of technology in quite a peculiar way. The title alone points to the unique and critical thoughts he has endeavored to convey: he calls his story "Super-Toys," which indicates his interest in artificial objects and the comparison between them and human beings. As a matter of justification, he describes an electronic gadget with a humanly character. He calls a family toaster David, which is given much love and care as a family gadget; furthermore, David the toaster is also given much freedom and the kind of respect offered to a normal human being. David is a blessing to the lives of the couple, Mr. and Mrs. Swinton. However, Aldiss describes David's weakness, portrayed by his lack of free thoughts and emotions, since he is just a toaster, a gadget. The author also points out that the affection David is given is no more than that given a gadget, since he is just a "toaster," an instrument used by human beings for the satisfaction of their belly and pleasure. This argument symbolically suggests that there is a wide distinction between humans and objects in terms of character and the ability to think and act.
In addition, Aldiss compares two artificial gadgets with different characteristics. He says that the only difference between a toaster and an android is artificial intelligence: the android has artificial intelligence, while the toaster operates systematically with a thermal heat regulator. Nonetheless, the android has no more consciousness of its own operations than has a toaster. This means they are similar in that neither can reason on its own like a human being; a human being has the ability to reason on his or her own without being prompted. Aldiss argues that the two gadgets have one thing in common: they are both programmed to work automatically, but they cannot make decisions on their own without being operated by a human. The idea at this point is that the difference portrayed between the two instruments indicates human knowledge in creating two different operating instruments with different functionalities but similar characteristics of programming. Humans are portrayed as intelligent creatures able to create gadgets that operate within a specific program installed in them.
On a different note, Aldiss presents a comparison between syntax and semantics, where he demonstrates the idea of free thoughts. These thoughts get manipulated with the purpose of reaching a final result or sum; for example, a math student can learn all the rules and syntax necessary to manipulate a set of equations describing a certain situation. In this description, the author demonstrates how human beings can be creative in manipulating numbers for the purpose of justifying a particular notion. Nevertheless, the numbers are just figures used to demonstrate some fundamental truths about certain issues. People use figures and numbers to justify their reasons for particular solutions, but the meaning behind the numbers is sometimes not valued because it does not offer a definite account of actuality. They are used to satisfy given syntax and mathematical situations, not necessarily to disclose their actual meaning as numbers. This is an enormous insight that Aldiss endeavors to demonstrate to people with doubts about the meaning behind numbers or figures. In truth, numbers are used to explain countable situations like distance, natural changes such as sunlight, and many more.
Additionally, the math student has access to the syntax, but not the semantics, of the math query. The dissimilarity between the android and the student is that the android cannot think independently, while the student has an independent brain that he uses to identify a problem and solve it spontaneously. Furthermore, the student has an independent mind, while the android has no mind, only a programmed system. This system cannot create new ideas without the help of a person or induced support; it is entirely dependent on the control people give it. The human action in attempting to help the android operate is the enormous difference between the android and the toaster. The android system and the toaster have been designed to react as if they control themselves, but they work under a systematic program fashioned into them. This dissimilarity highlights the power of the human brain invested in artificial objects. The conversation between Mr. Swinton and David is an awkward display: David only operates as a robot programmed to serve people as if it were human. Aldiss demonstrates the relationship between instruments and people, which is meant to benefit people rather than the machine. The machine is created only to serve its master, the human being, through high-tech programs installed in the robotic equipment; the robot cannot have authority over its maker, since it is just an object created to serve its master.
The android and the toaster have no stimuli driving the way they operate; rather, they are forced to work by the command programs systematically working in them. This indicates a big breach between people and the technology they use. Today people have improved technologically to the point that they are able to manufacture instruments that identify instructions electronically, through intensive programs installed in them. The significant thing to understand about a program designed into these systems is how it is fine-tuned to operate: an android instrument has a series of programs operating within its memory system, which enables the gadget to operate effectively according to the instructions given to it. The machines are created in such a way that they do not have the aptitude to feel freely, to be self-aware or to learn. Swinton's love for the android and the toaster is just a symbolic affiliation demonstrated by the author of the story. In an actual sense, the author endeavors to demonstrate the kind of attention people give to objects they have created with their own hands; there are people who love their instruments or machines with enormous love and passion. Aldiss also endeavors to answer the question about human-machine relationships that people keep wondering about. The android and the toaster are the examples the author has chosen in order to sell his idea on issues regarding technology exploitation and relations.
Aldiss, B. (2010). Super-Toys Last All Summer Long. Retrieved on 15/4/13 from http://www.musingsbystarlight.com/2010/05/supertoys-last-all-summer-long-by-brian.html