Tuesday, January 12, 2010

What is a Proxy Server?


Schematic representation of a proxy server, where the computer in the middle acts as the proxy server between the other two.



In computer networks, a proxy server is a server (a computer system or an application program) that acts as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource, available from a different server. The proxy server evaluates the request according to its filtering rules. For example, it may filter traffic by IP address or protocol. If the request is validated by the filter, the proxy provides the resource by connecting to the relevant server and requesting the service on behalf of the client. A proxy server may optionally alter the client's request or the server's response, and sometimes it may serve the request without contacting the specified server. In this case, it 'caches' responses from the remote server, and returns subsequent requests for the same content directly.
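
The client's side of this arrangement can be sketched with Python's standard library: the client hands all requests to a proxy, which fetches resources on its behalf. The proxy address below is a placeholder (a TEST-NET address), not a real server.

```python
import urllib.request

# Route all HTTP requests through a (hypothetical) proxy at 192.0.2.10:3128.
proxy = urllib.request.ProxyHandler({"http": "http://192.0.2.10:3128"})
opener = urllib.request.build_opener(proxy)

# Every request opened through `opener` is sent to the proxy, which
# evaluates it and, if allowed, fetches the resource for the client:
# opener.open("http://example.com/")
```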

A proxy server has many potential purposes, including:

* To keep machines behind it anonymous (mainly for security).
* To speed up access to resources (using caching). Web proxies are commonly used to cache web pages from a web server.[2]
* To apply access policy to network services or content, e.g. to block undesired sites.
* To log / audit usage, i.e. to provide company employee Internet usage reporting.
* To bypass security or parental controls.
* To scan transmitted content for malware before delivery.
* To scan outbound content, e.g., for data leak protection.
* To circumvent regional restrictions.

A proxy server that passes requests and replies unmodified is usually called a gateway or sometimes tunneling proxy.

A proxy server can be placed in the user's local computer or at various points between the user and the destination servers on the Internet.

A reverse proxy is (usually) an Internet-facing proxy used as a front-end to control and protect access to a server on a private network, commonly also performing tasks such as load-balancing, authentication, decryption or caching.


Types and functions

Proxy servers implement one or more of the following functions:

Caching proxy server

A caching proxy server accelerates service requests by retrieving content saved from a previous request made by the same client or even other clients. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and cost while noticeably improving performance. Most ISPs and large businesses have a caching proxy. These machines are built for very fast file-system performance (often with RAID and journaling) and often run heavily tuned TCP implementations. Caching proxies were the first kind of proxy server.
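
The core behaviour described above reduces to a simple pattern: the first request for a URL goes upstream, and repeats are served from the local copy. This toy sketch (with `fetch_upstream` as a stand-in for a real origin fetch) shows the idea; a real cache also handles expiry and validation.

```python
cache = {}
fetch_count = 0

def fetch_upstream(url):
    # Stand-in for fetching from the origin server.
    global fetch_count
    fetch_count += 1
    return f"content of {url}"

def proxy_get(url):
    if url not in cache:           # cache miss: go upstream once
        cache[url] = fetch_upstream(url)
    return cache[url]              # cache hit: serve the local copy

proxy_get("http://example.com/a")
proxy_get("http://example.com/a")  # second request never reaches upstream
assert fetch_count == 1
```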

Some poorly implemented caching proxies have had downsides (e.g., an inability to use user authentication). Some problems are described in RFC 3143 (Known HTTP Proxy/Caching Problems).

Another important use of the proxy server is to reduce the hardware cost. An organization may have many systems on the same network or under control of a single server, prohibiting the possibility of an individual connection to the Internet for each system. In such a case, the individual systems can be connected to one proxy server, and the proxy server connected to the main server.


Web proxy

A proxy that focuses on World Wide Web traffic is called a "web proxy". The most common use of a web proxy is to serve as a web cache. Most proxy programs provide a means to deny access to URLs specified in a blacklist, thus providing content filtering. This is often used in a corporate, educational or library environment, and anywhere else where content filtering is desired. Some web proxies reformat web pages for a specific purpose or audience, such as for cell phones and PDAs.
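
The blacklist check mentioned above can be sketched in a few lines. The blocked hostnames here are invented examples; real products ship and update much larger lists.

```python
from urllib.parse import urlparse

# Hypothetical blacklist of blocked hostnames.
BLACKLIST = {"blocked.example", "ads.example"}

def allowed(url):
    # Extract the hostname and refuse anything on the blacklist.
    host = urlparse(url).hostname or ""
    return host not in BLACKLIST

assert allowed("http://example.com/page")
assert not allowed("http://blocked.example/index.html")
```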

AOL dialup customers used to have their requests routed through an extensible proxy that 'thinned' or reduced the detail in JPEG pictures. This sped up performance but caused problems, either when more resolution was needed or when the thinning program produced incorrect results. This is why in the early days of the web many web pages would contain a link saying "AOL Users Click " to bypass the web proxy and to avoid the bugs in the thinning software.


Content-filtering web proxy
Further information: Content-control software

A content-filtering web proxy server provides administrative control over the content that may be relayed through the proxy. It is commonly used in both commercial and non-commercial organizations (especially schools) to ensure that Internet usage conforms to acceptable use policy. In some cases users can circumvent the proxy, since there are services designed to proxy information from a filtered website through a non-filtered site to allow it through the user's proxy.

Some common methods used for content filtering include: URL or DNS blacklists, URL regex filtering, MIME filtering, or content keyword filtering. Some products have been known to employ content analysis techniques to look for traits commonly used by certain types of content providers.

A content filtering proxy will often support user authentication, to control web access. It also usually produces logs, either to give detailed information about the URLs accessed by specific users, or to monitor bandwidth usage statistics. It may also communicate with daemon-based and/or ICAP-based antivirus software to provide security against viruses and other malware by scanning incoming content in real time before it enters the network.

Anonymizing proxy server

An anonymizing proxy server (sometimes called a web proxy) generally attempts to anonymize web surfing. There are different varieties of anonymizers. One of the more common variations is the open proxy. Because they are typically difficult to track, open proxies are especially useful to those seeking online anonymity, from political dissidents to computer criminals. Some users are merely interested in anonymity for added security, for instance hiding their identities from potentially malicious websites, or on principle, to facilitate constitutional rights such as freedom of speech. The destination server receives requests from the anonymizing proxy server, and thus does not receive information about the end user's address. However, the requests are not anonymous to the anonymizing proxy server itself, so a degree of trust must exist between that server and the user. Many anonymizing proxies are funded through continued advertising to the user.

Access control: Some proxy servers implement a logon requirement. In large organizations, authorized users must log on to gain access to the web. The organization can thereby track usage to individuals.

Some anonymizing proxy servers may forward data packets with header lines such as HTTP_VIA, HTTP_X_FORWARDED_FOR, or HTTP_FORWARDED, which may reveal the IP address of the client. Other anonymizing proxy servers, known as elite or high anonymity proxies, only include the REMOTE_ADDR header with the IP address of the proxy server, making it appear that the proxy server is the client. A website could still suspect a proxy is being used if the client sends packets which include a cookie from a previous visit that did not use the high anonymity proxy server. Clearing cookies, and possibly the cache, would solve this problem.
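
A rough classifier following the distinction above: if a request arrives carrying any of the revealing headers, the proxy is at best anonymous; if none are present, it behaves like an elite (high-anonymity) proxy. This is a simplification of how real detection works, using the header names from the text.

```python
def anonymity_level(headers):
    # Headers that either announce a proxy or leak the client's address.
    revealing = {"Via", "X-Forwarded-For", "Forwarded"}
    if revealing & set(headers):
        return "transparent/anonymous"   # proxy revealed itself or the client
    return "elite"                       # only the proxy's own address is visible

assert anonymity_level({"X-Forwarded-For": "203.0.113.5"}) == "transparent/anonymous"
assert anonymity_level({}) == "elite"
```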

Hostile proxy

Proxies can also be installed in order to eavesdrop upon the dataflow between client machines and the web. All accessed pages, as well as all forms submitted, can be captured and analyzed by the proxy operator. For this reason, passwords to online services (such as webmail and banking) should always be exchanged over a cryptographically secured connection, such as SSL.

Intercepting proxy server

An intercepting proxy combines a proxy server with a gateway or router (commonly with NAT capabilities). Connections made by client browsers through the gateway are diverted to the proxy without client-side configuration (or often knowledge). Connections may also be diverted from a SOCKS server or other circuit-level proxies.

Intercepting proxies are also commonly referred to as "transparent" proxies, or "forced" proxies, presumably because the existence of the proxy is transparent to the user, or the user is forced to use the proxy regardless of local settings.

Purpose

Intercepting proxies are commonly used in businesses to prevent avoidance of acceptable use policy, and to ease administrative burden, since no client browser configuration is required. This second reason however is mitigated by features such as Active Directory group policy, or DHCP and automatic proxy detection.

Intercepting proxies are also commonly used by ISPs in some countries to save upstream bandwidth and improve customer response times by caching. This is more common in countries where bandwidth is more limited (e.g. island nations) or must be paid for.

Issues

The diversion / interception of a TCP connection creates several issues. Firstly, the original destination IP and port must somehow be communicated to the proxy. This is not always possible (e.g. where the gateway and proxy reside on different hosts). There is a class of cross-site attacks which depend on certain behaviour of intercepting proxies that do not check, or do not have access to, information about the original (intercepted) destination. This problem can be resolved by using an integrated packet-level and application-level appliance or software which is then able to communicate this information between the packet handler and the proxy.

Intercepting also creates problems for HTTP authentication, especially connection-oriented authentication such as NTLM, since the client browser believes it is talking to a server rather than a proxy. This can cause problems where an intercepting proxy requires authentication, then the user connects to a site which also requires authentication.

Finally, intercepting connections can cause problems for HTTP caches, since some requests and responses become uncacheable by a shared cache.

Therefore, intercepting connections is generally discouraged. However, due to the simplicity of deploying such systems, they are in widespread use.

Detecting

It is often possible to detect the use of an intercepting proxy server by comparing the client's external IP address to the address seen by an external web server, or sometimes by examining the HTTP headers received by a server. A number of sites have been created to address this issue (such as whatismyip.com), by reporting the user's IP address as seen by the site back to the user in a web page.
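
The detection idea boils down to comparing two addresses: the one the client believes it has, and the one an external server reports seeing. A mismatch suggests an intermediary; matching addresses do not prove the absence of one. This sketch assumes the client already obtained both values (e.g. from a "what is my IP" site).

```python
def proxy_suspected(local_ip, ip_seen_by_server):
    # If the server sees a different address than the client's own,
    # some intermediary (proxy, NAT, etc.) is rewriting the traffic.
    return local_ip != ip_seen_by_server

assert proxy_suspected("10.0.0.5", "198.51.100.7")
assert not proxy_suspected("198.51.100.7", "198.51.100.7")
```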

Transparent and non-transparent proxy server

The term "transparent proxy" is most often used incorrectly to mean "intercepting proxy" (because the client does not need to configure a proxy and cannot directly detect that its requests are being proxied). Transparent proxies can be implemented using Cisco's WCCP (Web Cache Communication Protocol). This proprietary protocol resides on the router and is configured from the cache, allowing the cache to determine which ports and traffic are sent to it via transparent redirection from the router. This redirection can occur in one of two ways: GRE tunneling (OSI Layer 3) or MAC rewrites (OSI Layer 2).

However, RFC 2616 (Hypertext Transfer Protocol—HTTP/1.1) offers different definitions:

"A 'transparent proxy' is a proxy that does not modify the request or response beyond what is required for proxy authentication and identification".
"A 'non-transparent proxy' is a proxy that modifies the request or response in order to provide some added service to the user agent, such as group annotation services, media type transformation, protocol reduction, or anonymity filtering".

A security flaw in the way that transparent proxies operate was published by Robert Auger in 2009, and an advisory was issued by the Computer Emergency Response Team listing dozens of affected transparent and intercepting proxy servers.

Forced proxy

The term "forced proxy" is ambiguous. It means both "intercepting proxy" (because it filters all traffic on the only available gateway to the Internet) and its exact opposite, "non-intercepting proxy" (because the user is forced to configure a proxy in order to access the Internet).

Forced proxy operation is sometimes necessary due to issues with the interception of TCP connections and HTTP. For instance, interception of HTTP requests can affect the usability of a proxy cache, and can greatly affect certain authentication mechanisms. This is primarily because the client thinks it is talking to a server, and so request headers required by a proxy cannot be distinguished from headers that may be required by an upstream server (especially authorization headers). Also, the HTTP specification prohibits caching of responses where the request contained an authorization header.

Suffix proxy

A suffix proxy server allows a user to access web content by appending the name of the proxy server to the URL of the requested content (e.g. "en.wikipedia.org.6a.nl").
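
The rewrite a suffix proxy performs is purely mechanical: the proxy's own domain is appended to the target hostname, leaving the rest of the URL intact. A minimal sketch, using the "6a.nl" suffix from the example above:

```python
from urllib.parse import urlparse, urlunparse

def suffix_url(url, proxy_suffix="6a.nl"):
    # Append the proxy's domain to the target host:
    # en.wikipedia.org -> en.wikipedia.org.6a.nl
    p = urlparse(url)
    return urlunparse(p._replace(netloc=f"{p.hostname}.{proxy_suffix}"))

assert suffix_url("http://en.wikipedia.org/wiki/Proxy") == \
    "http://en.wikipedia.org.6a.nl/wiki/Proxy"
```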

Suffix proxy servers are easier to use than regular proxy servers. The concept appeared in 2003 in the form of IPv6Gate and in 2004 in the form of the Coral Content Distribution Network, but the term suffix proxy was only coined in October 2008 by "6a.nl"[citation needed].

Open proxy server
Main article: Open proxy

Because open proxies are often implicated in abuse, system administrators have developed a number of ways to refuse service to them. Many IRC networks automatically test client systems for known types of open proxy. Likewise, an email server may be configured to automatically test e-mail senders for open proxies.

Groups of IRC and electronic mail operators run DNSBLs publishing lists of the IP addresses of known open proxies, such as AHBL, CBL, NJABL, and SORBS.

The ethics of automatically testing clients for open proxies are controversial. Some experts, such as Vernon Schryver, consider such testing to be equivalent to an attacker portscanning the client host. Others consider the client to have solicited the scan by connecting to a server whose terms of service include testing.

Reverse proxy server
Main article: Reverse proxy

A reverse proxy is a proxy server that is installed in the neighborhood of one or more web servers. All traffic coming from the Internet and with a destination of one of the web servers goes through the proxy server. There are several reasons for installing reverse proxy servers:

* Encryption / SSL acceleration: when secure web sites are created, the SSL encryption is often not done by the web server itself, but by a reverse proxy that is equipped with SSL acceleration hardware. See Secure Sockets Layer. Furthermore, a host can provide a single "SSL proxy" to provide SSL encryption for an arbitrary number of hosts; removing the need for a separate SSL Server Certificate for each host, with the downside that all hosts behind the SSL proxy have to share a common DNS name or IP address for SSL connections.
* Load balancing: the reverse proxy can distribute the load to several web servers, each web server serving its own application area. In such a case, the reverse proxy may need to rewrite the URLs in each web page (translation from externally known URLs to the internal locations).
* Serve/cache static content: A reverse proxy can offload the web servers by caching static content like pictures and other static graphical content.
* Compression: the proxy server can optimize and compress the content to speed up the load time.
* Spoon feeding: reduces resource usage caused by slow clients on the web servers by caching the content the web server sent and slowly "spoon feeding" it to the client. This especially benefits dynamically generated pages.
* Security: the proxy server is an additional layer of defense and can protect against some OS- and web-server-specific attacks. However, it does not provide any protection from attacks against the web application or service itself, which is generally considered the larger threat.
* Extranet Publishing: a reverse proxy server facing the Internet can be used to communicate to a firewalled server internal to an organization, providing extranet access to some functions while keeping the servers behind the firewalls. If used in this way, security measures should be considered to protect the rest of your infrastructure in case this server is compromised, as its web application is exposed to attack from the Internet.

Tunneling proxy server

A tunneling proxy server is a means of defeating blocking policies that are themselves implemented using proxy servers. Most tunneling proxy servers are themselves proxy servers of varying degrees of sophistication, which effectively implement "bypass policies".

A tunneling proxy server is a web-based page that takes a site that is blocked and "tunnels" it, allowing the user to view blocked pages. A famous example is elgooG, which allowed users in China to use Google after it had been blocked there. elgooG differs from most tunneling proxy servers in that it circumvents only one block.

A September 2007 report from Citizen Lab recommended the web-based proxies Proxify, StupidCensorship, and CGIProxy. Alternatively, users could partner with individuals outside the censored network running Psiphon or Peacefire/tunneling proxy server. A more elaborate approach suggested was to run free tunneling software such as FreeGate, or the pay services Anonymizer and Ghost Surf. Also listed were the free application tunneling software Gpass and HTTP Tunnel, and the pay application software Relakks and Guardster. Lastly, the anonymous communication networks JAP ANON, Tor, and I2P offer a range of possibilities for secure publication and browsing.

Other options include Garden and GTunnel by Garden Networks.

Students are able to access blocked sites (games, chatrooms, messenger, offensive material, internet pornography, social networking, etc.) through a tunneling proxy server. As fast as the filtering software blocks tunneling proxy servers, others spring up. However, in some cases the filter may still intercept traffic to the tunneling proxy server, thus the person who manages the filter can still see the sites that are being visited.

Tunneling proxy servers are also used by people who have been blocked from a web site.

Another use of a tunneling proxy server is to allow access to country-specific services, so that Internet users from other countries may also make use of them. An example is country-restricted reproduction of media and webcasting.

The use of tunneling proxy servers is usually safe, with the exception that such sites run by an untrusted third party may have hidden intentions, such as collecting personal information. As a result, users are typically advised against sending personal data such as credit card numbers or passwords through a tunneling proxy server.

In some network configurations, clients attempting to access the proxy server are given different levels of access privilege based on their computer's location or even the MAC address of the network card. However, if one has access to a system with higher access rights, one could use that system as a proxy through which the other clients access the original proxy server, consequently elevating their access privileges.

Content filter

Many workplaces, schools, and colleges restrict the web sites and online services that are made available in their buildings. This is done either with a specialized proxy, called a content filter (both commercial and free products are available), or by using a cache-extension protocol such as ICAP, which allows plug-in extensions to an open caching architecture.

Requests made to the open internet must first pass through an outbound proxy filter. The web-filtering company provides a database of URL patterns (regular expressions) with associated content attributes. This database is updated weekly by site-wide subscription, much like a virus filter subscription. The administrator instructs the web filter to ban broad classes of content (such as sports, pornography, online shopping, gambling, or social networking). Requests that match a banned URL pattern are rejected immediately.
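
The URL-pattern matching step above can be sketched with regular expressions: a request whose URL matches any banned pattern is rejected before anything is fetched. The patterns here are invented examples, not a vendor database.

```python
import re

# Hypothetical banned URL patterns, standing in for a subscription database.
BANNED_PATTERNS = [re.compile(r"casino"), re.compile(r"\bgambling\b")]

def request_allowed(url):
    # Reject the request if any banned pattern appears anywhere in the URL.
    return not any(p.search(url) for p in BANNED_PATTERNS)

assert not request_allowed("http://bigcasino.example/slots")
assert request_allowed("http://news.example/article")
```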

Assuming the requested URL is acceptable, the content is then fetched by the proxy. At this point a dynamic filter may be applied on the return path. For example, JPEG files could be blocked based on flesh-tone matches, or language filters could dynamically detect unwanted language. If the content is rejected, an HTTP fetch error is returned and nothing is cached.

Most web filtering companies use an internet-wide crawling robot that assesses the likelihood that content is of a certain type (e.g., a page might be scored as 70% likely pornography, 40% likely sports, and 30% likely news). The resulting database is then corrected by manual labor based on complaints or known flaws in the content-matching algorithms.

Web filtering proxies are not able to peer inside secure HTTPS transactions, assuming the SSL/TLS chain of trust has not been tampered with. As a result, users wanting to bypass web filtering will typically search the Internet for an open and anonymous HTTPS transparent proxy. They will then configure their browser to proxy all requests through the web filter to this anonymous proxy. Those requests will be encrypted with HTTPS, and the web filter cannot distinguish these transactions from, say, legitimate access to a financial website. Thus, content filters are only effective against unsophisticated users.

As mentioned above, the SSL/TLS chain-of-trust does rely on trusted root certificate authorities; in a workplace setting where the client is managed by the organization, trust might be granted to a root certificate whose private key is known to the proxy. Concretely, a root certificate generated by the proxy is installed into the browser CA list by IT staff. In such scenarios, proxy analysis of the contents of a SSL/TLS transaction becomes possible. The proxy is effectively operating a man-in-the-middle attack, allowed by the client's trust of a root certificate the proxy owns.

A special case of web proxies is "CGI proxies". These are web sites that allow a user to access a site through them. They generally use PHP or CGI to implement the proxy functionality. These types of proxies are frequently used to gain access to web sites blocked by corporate or school proxies. Since they also hide the user's own IP address from the web sites they access through the proxy, they are sometimes also used to gain a degree of anonymity, called "Proxy Avoidance".

What is T1?


Two Network Interface Units. On the left with a single card, the right with two.

T1 (T-Carrier)
In telecommunications, T-carrier, sometimes abbreviated as T-CXR, is the generic designator for any of several digitally multiplexed telecommunications carrier systems originally developed by Bell Labs and used in North America, Japan, and Korea.

The basic unit of the T-carrier system is the DS0, which has a transmission rate of 64 kbit/s, and is commonly used for one voice circuit.

The most common legacy of this system is the line rate speeds. "T1" now means any data circuit that runs at the original 1.544 Mbit/s line rate. Originally the T1 format carried 24 pulse-code modulated, time-division multiplexed speech signals each encoded in 64 kbit/s streams, leaving 8 kbit/s of framing information which facilitates the synchronization and demultiplexing at the receiver. T2 and T3 circuit channels carry multiple T1 channels multiplexed, resulting in transmission rates of 6.312 and 44.736 Mbit/s, respectively.
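
The 1.544 Mbit/s line rate follows directly from the figures above: 24 DS0 channels at 64 kbit/s each, plus 8 kbit/s of framing.

```python
ds0 = 64               # kbit/s per voice channel (one DS0)
channels = 24          # DS0s multiplexed into a T1
framing = 8            # kbit/s of framing overhead

t1 = channels * ds0 + framing   # kbit/s
assert t1 == 1544               # = 1.544 Mbit/s
```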

Higher T

In the late 1960s and early 1970s Bell Labs developed higher rate systems. T-1C with a more sophisticated modulation scheme carried 3 Mbit/s, on those balanced pair cables that could support it. T-2 carried 6.312 Mbit/s, requiring a special low-capacitance cable with foam insulation. This was standard for Picturephone. T-4 and T-5 used coaxial cables, similar to the old L-carriers used by AT&T Long Lines. TD microwave radio relay systems were also fitted with high rate modems to allow them to carry a DS1 signal in a portion of their FM spectrum that had too poor quality for voice service. Later they carried DS3 and DS4 signals. Later optical fiber, typically using SONET transmission scheme, overtook them.

What is E1?

E1 (E-Carrier)
In digital telecommunications, where a single physical wire pair can be used to carry many simultaneous voice conversations, worldwide standards have been created and deployed. The European Conference of Postal and Telecommunications Administrations (CEPT) originally standardized the E-carrier system, which revised and improved the earlier American T-carrier technology, and this has now been adopted by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T). This is now widely used in almost all countries outside the USA, Canada and Japan.

The E-carrier standards form part of the Plesiochronous Digital Hierarchy (PDH) where groups of E1 circuits may be bundled onto higher capacity E3 links between telephone exchanges or countries. This allows a network operator to provide a private end-to-end E1 circuit between customers in different countries that share single high capacity links in between.

In practice, only E1 (30 circuit) and E3 (480 circuit) versions are used. Physically E1 is transmitted as 32 timeslots and E3 512 timeslots, but one is used for framing and typically one allocated for signalling call setup and tear down. Unlike Internet data services, E-carrier systems permanently allocate capacity for a voice call for its entire duration. This ensures high call quality because the transmission arrives with the same short delay (Latency) and capacity at all times.
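
The E1 arithmetic implied above: 32 timeslots of 64 kbit/s give the well-known 2.048 Mbit/s line rate, and with one slot used for framing and one for signalling, 30 voice circuits remain.

```python
slots = 32                  # timeslots physically transmitted on an E1
per_slot = 64               # kbit/s per timeslot (one DS0-equivalent)

line_rate = slots * per_slot    # kbit/s
voice_circuits = slots - 2      # minus framing and signalling slots

assert line_rate == 2048        # = 2.048 Mbit/s
assert voice_circuits == 30
```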

E1 circuits are very common in most telephone exchanges and are used to connect to medium and large companies, to remote exchanges and in many cases between exchanges. E3 lines are used between exchanges, operators and/or countries, and have a transmission speed of 34.368 Mbit/s.

What is STM and ATM?

Asynchronous Transfer Mode (ATM) is a standardized digital data transmission technology. ATM is implemented as a network protocol and was first developed in the mid-1980s.[1] The goal was to design a single networking strategy that could transport real-time video conference and audio as well as image files, text and email.[2] The International Telecommunication Union, American National Standards Institute, European Telecommunications Standards Institute, ATM Forum, Internet Engineering Task Force, Frame Relay Forum and SMDS Interest Group were involved in the creation of the standard.[3]

Asynchronous Transfer Mode is a cell-based switching technique that uses asynchronous time division multiplexing.[4][5] It encodes data into small fixed-sized cells (cell relay) and provides data link layer services that run over OSI Layer 1 physical links. This differs from other technologies based on packet-switched networks (such as the Internet Protocol or Ethernet), in which variable sized packets (known as frames when referencing Layer 2) are used. ATM exposes properties from both circuit switched and small packet switched networking, making it suitable for wide area data networking as well as real-time media transport.[6] ATM uses a connection-oriented model and establishes a virtual circuit between two endpoints before the actual data exchange begins.[7]

ATM is a core protocol used over the SONET/SDH backbone of the Integrated Services Digital Network.


STM-1 (Synchronous Transport Module level 1) is the SDH ITU-T fiber-optic network transmission standard. It has a bit rate of 155.52 Mbit/s. The other levels are STM-4, STM-16 and STM-64. Beyond this, wavelength-division multiplexing (WDM) is commonly used, for example in submarine cabling.

Digital Signal 0 (DS0) and Digital Signal 3 (DS3)

Digital Signal 0 (DS0) is a basic digital signalling rate of 64 kbit/s, corresponding to the capacity of one voice-frequency-equivalent channel.

Because of its fundamental role in carrying a single phone call, the DS0 rate forms the basis for the digital multiplex transmission hierarchy in telecommunications systems used in North America. To limit the number of wires required between two points involved in exchanging voice calls, a system was built in which multiple DS0s are multiplexed together on higher capacity circuits. In this system, twenty-four (24) DS0s are multiplexed into a DS1 signal. Twenty-eight (28) DS1s are multiplexed into a DS3. When carried over copper wire, this is the well-known T-carrier system, with T1 and T3 corresponding to DS1 and DS3, respectively.
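
The hierarchy above, expressed as channel counts, explains the DS3 figures given later in this post: 24 DS0s per DS1 and 28 DS1s per DS3 multiply out to 672 voice channels.

```python
ds0_per_ds1 = 24        # voice channels multiplexed into a DS1 (T1)
ds1_per_ds3 = 28        # DS1 signals multiplexed into a DS3 (T3)

ds0_per_ds3 = ds0_per_ds1 * ds1_per_ds3
assert ds0_per_ds3 == 672
```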

Besides its use for voice communications, the DS0 rate may support twenty 2.4 kbit/s channels, ten 4.8 kbit/s channels, five 9.6 kbit/s channels, one 56 kbit/s channel, or one 64 kbit/s clear channel.


Digital Signal 3 (DS3) is a digital signal level 3 T-carrier. It may also be referred to as a T3 line.

* The data rate for this type of signal is 44.736 Mbit/s.
* This level of carrier can transport 28 DS1 level signals within its payload.
* This level of carrier can transport 672 DS0 level channels within its payload.

Usage
The level of transport or circuit is mostly used between telephony carriers, both wired and wireless.

Saturday, January 9, 2010

How to fix Black Screen Of Death?


Windows users may be familiar with the "blue screen of death," which occurs when their computers essentially shut down because of an operating system problem. The new "black screen of death" appears to occur when the computer is first turned on, after which it shuts down.

British security firm Prevx offered a solution to this problem on their blog:

1) Restart your PC
2) Log on and wait for the black screen to appear
3) Make sure your PC can connect to the Internet (the black screen does not appear to affect this)
4) Press the CTRL, ALT and DEL keys simultaneously
5) When prompted, Click Start Task Manager
6) In Task Manager Click on the Application Tab
7) Next Click New Task
8) Now enter the command:
"C:\Program Files\Internet Explorer\iexplore.exe" "http://info.prevx.com/download.asp?GRAB=BLACKSCREENFIX"
9) Click OK and your (Web) browser should start up and begin the download process
10) When prompted for the download, click Run; the black screen fix program will download and run, automatically fixing the issue.
11) Now restart your PC and the black screen problem will hopefully be gone.

Tuesday, January 5, 2010

Jude 1:3 (New International Version)

The sin and doom of Godless men
3Dear friends, although I was very eager to write to you about the salvation we share, I felt I had to write and urge you to contend for the faith that was once for all entrusted to the saints.

How to fix Blue Screen Of Death?


Most of us have come across a situation where, while working on the system, a "blue screen" suddenly comes up reading "Physical Memory Dump" along with some hexadecimal numbers. This is like a nightmare to all computer users, and we all fear this problem, don't we?

There are many causes for this problem. The most common are:


1. Software Failure:
This is usually due to a software I/O error or a bad driver. It can often be solved by reinstalling the specific driver or, as a last resort, reformatting the computer.

2. RAM Failure:

This problem is mostly caused by a fault in the RAM installed in the computer. Try swapping it with a known-good module of a similar configuration, for example one borrowed from a friend, and see if the problem persists.

3. Hardware Failure:

The problem can arise due to a conflict between the system and any new hardware that you have installed. This could be a hard disk, a mouse, or any other device, so unplug the new hardware you installed.

4. Registry Problems:

This problem can also be caused by errors in your registry values (caused by some illegal software or a virus). If you do not know the correct values, then formatting may be the only option.

Here is how to stop the computer from restarting automatically on serious errors, and how to disable the memory dump:

• Open the My Computer Properties dialog and click the Advanced tab.

• Then click the Settings button in the Start-up and Recovery section.



• Click to remove the check next to Automatically Restart checkbox.

• In the Write debugging information drop-down list, select (none), and then click OK.