
Source: http://reviews.cnet.com/digital-camera-buying-guide/

 

(Credit: Sarah Tew/CNET)

 

For many people, buying a camera isn’t an easy thing to do. It’s not really a one-model-fits-all kind of product, so there’s not just a single camera you can point to and say, “Buy this!”

In fact, it’s the opposite; with such a range of types, sizes, features, and prices, unless you know your exact needs, you could very well end up disappointed with your purchase. And that’s what this guide is all about: Helping you make the best camera purchase for your needs and budget.

For people who just want some good recommendations, hit the slideshow below for some of our top choices or check out our lists of best cameras by category. Otherwise, read on for our advice.

In a rush? Our top camera picks (pictures)

The most important stuff

  1. There is no spec that tells you which camera is best. A higher resolution (i.e., more megapixels) or bigger zoom range doesn’t make the camera better. I’ll repeat: you’re never looking for the camera with the most megapixels or longest zoom.
  2. Don’t get hung up on making sure you’ve got the “best” in a particular class. The truth is, one camera rarely bests the rest on all four major criteria — photo quality, performance, features, and design — at least not at a friendly price. (You may have noticed how few Editors’ Choice Awards we give for cameras; that’s partly why.) You want something that’s best for you. And that may mean, for example, a camera that doesn’t produce stellar photo quality, or at least not photos that pixel peepers would call stellar.
  3. Try before you buy. Make sure it fits comfortably in your hand and that it’s not so big or heavy that you’ll leave it at home. It should provide quick access to the most commonly used functions, and menus should be simply structured, logical, and easy to learn. Touch-screen models can allow for greater functionality, but can also be frustrating if the controls and menus are poorly organized or the screen can’t be calibrated to your touch.

For more general buying advice, check out our steps to the perfect camera purchase.

What type of camera?

If you don’t understand any of the terms or their implications, jump down to the Key Specs section below.

 

Point and shoot (budget)

Less than $200.

Who it’s for: Anyone who wants something that’s a step up from a camera phone.
Key characteristics: Pocketable; lens fixed to body; zoom range usually less than 15x; small sensor; designed for mostly automatic operation.
Image quality and performance: Good enough for snapshots and social media, short vacation and kids video clips, and fast enough for food and the occasional good shot of kids and pets in action.

Compact megazoom

$200 – $350

Who it’s for: Those who want a step up from a camera phone but frequently can’t get close enough to capture the photo they want.
Key characteristics: Pocketable; lens fixed to body; zoom range usually more than 20x; small sensor; designed for automatic and some manual operation.
Image quality and performance: Better quality than a point-and-shoot; fast enough for kids and pets, short vacation clips, and kids video clips.

Megazoom

$350 – $500

Who it’s for: People who want one camera that can shoot both close-ups and players’ faces from the nosebleed seats.
Key characteristics: Big, with a small sensor; lens fixed to body; zoom range usually more than 26x; designed for automatic and some manual operation. The less-expensive models lack an EVF.

These are sometimes misleadingly referred to as bridge cameras, as in bridging the gap between a compact and a dSLR. But despite their size and appearance, they have nothing in common with dSLRs; on the inside, they’re pure point-and-shoot.

Image quality and performance: Equivalent photo and video quality to a point-and-shoot; fast enough for the accidental action shot but mostly slow-moving subjects.

Enthusiast compact

$400 – $2,800

Who it’s for: People who enjoy photography and like to play with settings but want something unobtrusive.
Key characteristics: Fits in a jacket pocket; lens fixed to body; small zoom range; medium-to-large sensor; some models have reverse Galilean optical viewfinders; designed for manual with some automatic operation.
Image quality and performance: Photo quality good enough for those who want to get artsy and/or possibly sell their photos; short video clips; fast enough for shooting food but usually not action.

Entry-level interchangeable-lens camera (ILC)

$400 – $600

Who it’s for: People who want something better and faster than a compact, but still as small as possible.
Key characteristics: Small enough to fit into a pocketbook; interchangeable lenses; sensor sizes range from compact-camera-equivalent to those found in dSLRs; designed for automatic and some manual operation. Usually no EVF, or the EVF is optional.
Image quality and performance: Comparable photo quality to an entry-level dSLR; better video quality than most compacts and point-and-shoots; fast enough for photographing kids and pets in motion.

Entry-level dSLR

$500 – $1,000 (with lens)

Who it’s for: Anyone who wants better speed and quality than a compact and prefers shooting with an optical viewfinder.
Key characteristics: Big, with a relatively large APS-C sensor; interchangeable lenses; TTL optical viewfinder; designed for either manual or automatic operation.
Image quality and performance: Comparable photo quality to an entry-level ILC; video quality varies significantly across brands; fast enough for photographing active kids and pets.

Prosumer ILC

$700+ (with lens)

Who it’s for: People who enjoy photography and videography and like to play with settings and lenses but want something unobtrusive.
Key characteristics: Small enough to fit into a pocketbook; interchangeable lenses; sensor sizes range from compact-camera-equivalent to those found in dSLRs; designed for manual and some automatic operation; has an EVF.
Image quality and performance: Comparable photo quality to a prosumer dSLR; suitable for people who want to get artsy and/or possibly sell their photos; video quality varies significantly across brands, but can be good enough for indie videographers; fast enough for photographing active kids and pets.

Prosumer dSLR

$1,000+ (body only)

Who it’s for: Advanced photographers who need speed and quality, as well as professionals on a tight budget or who need secondary bodies.
Key characteristics: Big, with a relatively large APS-C or full-frame sensor; interchangeable lenses; designed for manual operation; has a TTL optical viewfinder.
Image quality and performance: Comparable photo quality to a prosumer ILC; suitable for those who want to get artsy and/or possibly sell their photos; video quality varies but can be good enough for indie videographers; fast enough for photographing sports-fast action.

Pro dSLR

$1,200+ (body only)

Who it’s for: People who need a reliable, durable, fully configurable, and consistent camera that delivers best-quality images and perhaps fast action-shooting performance.
Key characteristics: Big, with a large APS-C or full-frame (or bigger) sensor; interchangeable lenses; optical viewfinder; designed for fully manual operation.
Image quality and performance: Photo and video quality good enough to sell to a knowledgeable buyer; performance fast enough to shoot sports or a bride fleeing the altar.

 

How much zoom?

A longer focal length lens lets you get closer without moving; for example, at 250mm you can see the observation deck of the Empire State Building, while at 1,000mm you can start to make out tiny people. In order to accommodate both wide-angle shots of an entire scene as well as long-distance close-ups, manufacturers have been making lenses with bigger and bigger zoom ranges. There are tradeoffs for this convenience, though. For one, it’s hard to keep a subject in the frame when you’re shooting at extreme telephoto. And a lens that has to be a jack-of-all focal lengths is generally a master of none of them. Generally, you probably don’t need more than 20x.

10x zoom, 25 to 250mm

42x zoom, 24 to 1,000mm

 

Key specs

Resolution
Generally referred to in megapixels. This number tells you how many pixels the camera uses to produce an image. Every modern camera has more than enough for any need. That’s why it’s not important as a spec. In fact, watch out for cheap cameras with high resolutions — they usually lack the processing power to deal with the large images, which can slow them down.

Lens 
There are two important specs for any lens: aperture and focal length. The lens’ focal length, measured in millimeters, conveys the magnification of the image and the amount of scene covered by the lens (called the angle of view). As focal length increases, things look bigger and take up more of the frame. A lens that covers multiple focal lengths is a zoom lens, and the zoom spec is the ratio of the longest to the shortest focal length: a 20-100mm lens, therefore, has a 5x zoom. A lens with a single focal length is called a prime lens, and very flat ones are usually referred to as pancake primes. Note that the focal lengths imprinted on the lenses of compact cameras will not match the focal lengths in the specs: the imprinted numbers are the actual focal lengths, while the reported specs apply a multiplier that normalizes the length to a frame of 35mm film, a reference point that adjusts for the multitude of sensor sizes in cameras. That multiplier is sometimes called the crop factor, and you really only need to think about it when looking at lenses for interchangeable-lens cameras.

  • Ultra wide-angle (less than 18mm) is good for very large scenes where lens distortion adds to rather than detracts from the appeal.
  • Wide-angle (around 18mm to 30mm) is good for group shots, landscapes, and street photography.
  • Normal (about 30mm to 70mm) is good for portraits and snapshots.
  • Telephoto (about 70mm to 300mm) is good for portraits and sports.
  • Super telephoto (greater than 300mm) is good for sports, wildlife, and stalking.
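The zoom-ratio and 35mm-equivalent arithmetic described above can be sketched in a couple of Python functions (the 1.5x crop factor below is just an illustrative value, typical of many APS-C sensors, and the function names are mine):

```python
def zoom_ratio(short_mm, long_mm):
    """Zoom spec: the ratio of the longest to the shortest focal length."""
    return long_mm / short_mm

def equivalent_focal_length(actual_mm, crop_factor):
    """35mm-equivalent focal length: actual focal length times the crop factor."""
    return actual_mm * crop_factor

print(zoom_ratio(20, 100))               # 5.0 -- a 20-100mm lens is a 5x zoom
print(equivalent_focal_length(18, 1.5))  # 27.0 -- 18mm on a 1.5x crop sensor
```

So an 18mm lens on a 1.5x-crop body frames the scene roughly like a 27mm lens would on 35mm film.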

The aperture is the size of the opening that lets in light, alternatively referred to as an f-stop or f-number; the lower the number, the larger the aperture. The maximum aperture usually varies over the zoom range; lens specs generally list it at the shortest and longest focal lengths. Thus, a spec of 18-55mm f3.5-5.6 means the widest aperture is f3.5 at 18mm and f5.6 at 55mm. As aperture size increases, the area of sharpness in front of and behind the subject (called depth of field) decreases. Since wider apertures let in more light and give you more control over depth of field, wider is better.
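To see why maximum aperture matters, note that the light a lens gathers is proportional to the square of 1/f-number. A quick Python sketch using the 18-55mm f3.5-5.6 example above:

```python
def light_ratio(f_a, f_b):
    """How many times more light aperture f_a gathers than f_b.
    Light gathered is proportional to the aperture area, i.e. (1/f-number) squared."""
    return (f_b / f_a) ** 2

# Comparing the two ends of an 18-55mm f3.5-5.6 kit lens:
print(round(light_ratio(3.5, 5.6), 2))  # 2.56 -- f3.5 gathers ~2.5x the light of f5.6
```

That factor of roughly 2.5 is well over a full stop of light, which is why "fast" glass costs so much more.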

A lens with a wide aperture is referred to as fast or bright, and one with a narrow aperture as slow. Fast lenses are considered better than slow lenses; confusingly, “fast” and “slow” have nothing to do with focusing performance. Also, watch out for lenses that start wide but narrow very quickly as you zoom: with a 24-120mm f2-5.9 lens, for instance, you don’t want the maximum aperture to jump from f2 at 24mm to f5.9 by the time you reach 28mm.

Sensor size and type 
Sensor size is the dimensions of the array of photoreceptors that create the pixels that become an image. Bigger sensors generally produce better photo quality, but the bigger the sensor, the bigger the camera: a larger sensor requires a larger lens and more space for supporting electronics, and if the camera uses sensor-shift image stabilization, the footprint grows even more. Larger sensors are also more expensive to make, so the cameras are pricier.

Sensor sizes are usually indicated in one of two ways: actual dimensions in millimeters, or labels such as “1/1.7-inch.” The latter is an old convention from the early days of digital video, and the labels don’t represent actual sizes; 1/1.7 inch isn’t equal to 0.59 inch, for example. They are accurate in a relative sense, though — i.e., 1/1.7 inch is smaller than 2/3 inch. The sensors in point-and-shoot cameras are small, at 1/2.3 inch, and those in camera phones even smaller, typically 1/3 or 1/3.2 inch.

The most commonly used CFA, the Bayer pattern.

There are a few primary sensor technologies; CMOS is the most popular. A variant, BSI CMOS (backside illuminated), is popular for compact cameras because it allows for greater low-light sensitivity on a relatively small sensor, though its image quality in good light usually doesn’t quite match that of traditional CMOS sensors. There are some manufacturer-specific variations as well, usually with different arrangements of the on-chip color filter array (CFA), which separates the incoming light into red, green, and blue primaries that later get recombined to form the colors in the image. The most common CFA is the Bayer array; some, such as Fujifilm’s X-Trans, have extra green-capturing sites, because green carries the most detail information (it’s a human-eye thing), while Sigma’s Foveon-based technology stacks the filters so that each pixel captures every color primary.

Cheaper point-and-shoots still use CCD (charge-coupled device) sensor technology. Inexpensive CCDs don’t deliver photo quality as nice as pricier CMOS sensors, but conversely, expensive CCDs like those used in medium-format cameras produce better photos. In general, CCDs are slow and poor for video.

Light sensitivity
A camera’s sensitivity to light is specified as ISO sensitivity; the higher the number, the better the camera’s ability to shoot in low light. However, as sensitivity rises so does the amount of noise — those colored speckles you see in night shots. Cameras perform noise suppression to try to eliminate it, but that can result in smeary-looking artifacts. As a result, few cameras perform usably at the top of their rated ISO sensitivity ranges, making this an unreliable spec. Take it with a big grain of salt and you can usually guess at the maximum usable sensitivity; for instance, a camera rated up to ISO 6400 will probably produce decent images up to ISO 800.

Viewfinder
While most consumer cameras these days have eliminated the viewfinder altogether, more-advanced models still have them. They’re useful when it’s hard to read an LCD in sunlight, and holding the camera up to your eye forces you into a more stable shooting position. There are basically three types: the reverse Galilean, the kind found on film point-and-shoots, which gives you a direct view of the scene rather than a through-the-lens (TTL) view; the electronic viewfinder, or EVF; and the TTL optical viewfinder found on dSLRs. EVFs have an advantage when shooting video, as you can’t simultaneously view and record video using a TTL viewfinder, and they can simulate what the photo will look like. Optical viewfinders, on the other hand, are better for shooting action: though they have a tiny blackout period between shots, an EVF can only show you the action once it’s already happened, not while it’s in progress. Some EVFs are better than others for this, however. Important viewfinder specs are percentage coverage, or how much of the scene they can display (100 percent is best, obviously), and effective magnification, which tells you how big the image looks in the viewfinder. A good viewfinder will also have a diopter adjustment, to fine-tune the viewfinder focus for your vision or for glasses wearers.

Image stabilization (IS)
This is what keeps your photos from displaying camera shake. There are two physical types: in-camera sensor shift and in-lens optical. While they perform similarly, optical IS seems to work a little better while shooting video, but sensor-shift means that for interchangeable-lens models you don’t have to wait for the manufacturer to put IS in the lens and the lenses will likely cost less and be a little smaller. Cheaper cameras may have electronic IS, which uses a combination of fast shutter speed and higher ISO sensitivities to help with motion blur. Unfortunately, this increases image noise and is less effective in low lighting.

Battery life and type 
Most cameras use lithium ion rechargeable battery packs. While they offer greater battery life than readily available AA-size batteries, they are generally designed for a specific make or model of camera. There are models using AA batteries, but they’re usually lower-end compacts and larger megazoom cameras. When buying a camera, check out how many shots its battery has been rated for, a specification that has been standardized by CIPA.

Burst/continuous shooting rate
A measure of the number of frames per second a camera can capture, this spec can get quite confusing. Optimally, you want a high frame rate, at full resolution, with autofocus and autoexposure, for a reasonable number of frames. In order to report a high frame rate, the most common spec, companies play fast and loose with the other variables; so, for example, they’ll say the camera does 10 frames per second (fps) — but that’s for 10 frames (i.e., 1 second), with exposure and autofocus fixed at the first frame, while the usable burst rate will be closer to 5fps.

Video
For typical vacation videos or videos of the kids, you want 1080/30p — “1080” refers to 1,920×1,080-pixel resolution, also referred to as Full HD, whereas “30p” stands for 30fps progressive video. These days, you should stay away from 60i — 60fps interlaced — as it has more visible artifacts than even 24p. If a camera offers a frame rate greater than 60fps, that lets you create slow-motion videos. As for codecs, the algorithms that compress and decompress the video, look for a real codec like H.264 or AVCHD, which are subsets of MPEG-4, rather than Motion JPEG. The actual video files have formats like MOV (QuickTime), AVI (Microsoft Audio/Video Interleave), MP4, and MTS (AVCHD). Video recording also has a bit rate, the amount of data it encodes per second of video; for this, higher is generally better. Because AVCHD is really a playback specification, it’s a lot less flexible with respect to available bit rates than H.264 MPEG-4.
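Bit rate translates directly into storage: megabits per second times seconds, divided by 8 bits per byte, gives megabytes. A small Python sketch (the 17Mbps and 24Mbps figures are illustrative bit rates of the sort AVCHD cameras use, not from any particular model):

```python
def file_size_mb(bitrate_mbps, seconds):
    """Approximate video file size in megabytes: bit rate in megabits/s
    times duration in seconds, divided by 8 bits per byte."""
    return bitrate_mbps * seconds / 8

# One minute of 1080/30p video at two illustrative bit rates:
print(file_size_mb(17, 60))  # 127.5 MB per minute
print(file_size_mb(24, 60))  # 180.0 MB per minute
```

This is why a higher bit rate is generally better for quality but eats memory cards faster.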

Shooting modes 
Check out this discussion of the various shooting features.

Other features

GPS
If you love knowing exactly where you were when you took a photo, you’ll want a camera with a built-in GPS (global positioning system) receiver. Typically found in rugged or higher-end cameras (add-on receivers are also available for some ILC and dSLR cameras), the GPS receiver uses satellite positioning to tag your pictures with location data. This location data can be read by software such as Google Earth or Picasa as well as photo-sharing sites to map where the photos were taken.

Depending on the camera’s capabilities, the GPS may also be used to tag photos with landmark information, set the camera’s clock to local time, track your path on a map as you shoot, or even help with basic navigation on foot.

The biggest downside is that it will drain your battery faster as it has to be left on so it can continue to update your location. It also won’t work indoors or, in rugged cameras, underwater. It will add to the cost of the camera, too.

One last note: Though some models state that they tag video with location information, the data is attached to the video as a separate file instead of being embedded as it is with photos. Generally this means the location information can only be viewed if the videos are played directly from the camera or with bundled software.

Wi-Fi
A few years ago, digital cameras with built-in Wi-Fi didn’t make much sense. It was basically no better than using a USB cable, and a really slow one at that. Now, with more people using smartphones and mobile hot spots, a camera with Wi-Fi offers more than just slow wireless backup.

 

The main function is still to wirelessly transfer photos and videos off the camera, but new models can back up straight to cloud services or networked computers as well as connect directly to a mobile device, so you can view, transfer, and edit shots, and then upload to sharing sites over your device’s mobile broadband. Some models use Wi-Fi to remotely control the camera, too, using your mobile device’s display as a viewfinder. Wi-Fi can also be used to piggyback on your smartphone’s GPS receiver for tagging photos with location data.

Samsung’s WB850F is one of several Wi-Fi-enabled cameras available from the manufacturer.

What this means is you can get things your smartphone’s camera can’t offer (e.g., better photo and video quality, a zoom lens, and more control) and still share on the go. Unfortunately, manufacturers currently use Wi-Fi as an upsell or add-on, so you may not be able to find the model you want with an option for Wi-Fi. In that case, consider an Eye-Fi wireless SD card. These work like regular SD memory cards for storage, but also have a built-in Wi-Fi radio for wireless backups and transfers to Web sites, mobile devices, and computers.


As the guy who reviews networking products, I generally receive a couple of e-mails from readers a day, and most of them, in one way or another, ask about the basics of networking (as in computer-to-computer networking; I’m not talking about social networks here).

Don’t get me wrong, I appreciate e-mails because, at the very least, it gives me the impression that there are real people out there amid the sea of spam. But I’d rather not keep repeating myself. So instead of saying the same thing over and over again in individual e-mails, I’ll talk all about home networking basics, in layman’s terms, in this post.

Advanced and experienced users won’t need this, but for the rest, I’d recommend reading the whole thing, and if you want to quickly find out what a networking term means, you can search for it here.

1. Wired networking
A wired local network is basically a group of devices connected to one another using network cables, more often than not, with the help of a router, which brings us to the very first networking term.

Router: This is the central device of a home network that you can plug one end of a network cable into. The other end of the cable goes into a networking device that has a network port. If you want to add more network devices to a router, you’ll need more cables and more ports on the router. These ports, both on the router and on the end devices, are called Local Area Network (LAN) ports. They are also known as RJ45 ports. The moment you plug a device into a router, you have yourself a wired network. Networking devices that come with an RJ45 network port are called Ethernet-ready devices. More on this below.

LAN ports: A home router usually has four LAN ports, meaning that out of the box it can host a network of up to four wired networking devices. If you want to have a larger network, you will need to resort to a switch (or a hub), which adds more LAN ports to the router. Generally a home router can handle up to about 250 networking devices, and the majority of homes and even small businesses don’t need more than that. There are currently two main speed standards for LAN ports: Ethernet, which caps at 100Mbps (or about 13MBps), and Gigabit Ethernet, which caps at 1Gbps (or about 125MBps). In other words, it takes about a minute to transfer a CD’s worth of data (around 700MB or about 250 digital songs) over an Ethernet connection. With Gigabit Ethernet, the same job takes just about 5 seconds. In real life, the average speed of an Ethernet connection is about 8MBps, and of a Gigabit Ethernet connection is somewhere between 45 and 80MBps. The actual speed of a network connection depends on many factors, such as the end devices, the quality of the cable, the amount of traffic, and so on.
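Those transfer-time figures come from simple division: convert megabits to megabytes (divide by 8), then divide the file size by the speed. A quick Python sanity check of the numbers above:

```python
def transfer_seconds(size_mb, speed_mbyte_per_s):
    """Time in seconds to move a file of size_mb megabytes
    at a sustained speed in megabytes per second."""
    return size_mb / speed_mbyte_per_s

CD_MB = 700  # a CD's worth of data, per the example above

print(transfer_seconds(CD_MB, 12.5))  # 56.0 s  -- theoretical Ethernet (100Mbps / 8)
print(transfer_seconds(CD_MB, 125))   # 5.6 s   -- theoretical Gigabit Ethernet
print(transfer_seconds(CD_MB, 8))     # 87.5 s  -- typical real-world Ethernet
```

The gap between the theoretical and real-world rows is exactly why the "about a minute" figure is an optimistic estimate.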

In short, LAN ports on a router allow Ethernet-ready devices to connect to one another and share data. In order for them to also access the Internet, the router needs to also have a Wide Area Network (WAN) port.

Switch vs. hub: A hub and a switch both add more LAN ports to an existing network. They help increase the number of Ethernet-ready clients that a network can host. The main difference between hubs and switches is a hub uses one shared channel for all of its ports, while a switch has a dedicated channel for each of its ports. This means the more clients you connect to a hub, the slower the data rate gets, whereas with a switch the speed doesn’t change according to the number of connected clients. For this reason, hubs are much cheaper than switches with the same amount of ports.

Hubs are somewhat obsolete now since the price of switches has come down significantly in the last few years. The price of a switch generally varies based on its standard (regular Ethernet or Gigabit, with the latter being more expensive), and the number of ports (the more ports, the higher the cost).

You can find a switch with just four or up to 24 ports (or even more). Note that the number of extra wired clients you can add to a network is equal to the switch’s total number of ports minus one. For example, a four-port switch adds another three clients to the network, because one of its ports must be used to connect the switch itself to the network — a connection that in turn occupies one of the router’s existing LAN ports. With this in mind, make sure you buy a switch with significantly more ports than the number of clients you intend to add to the network.
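The ports-minus-one rule above is trivial arithmetic, but it's easy to forget when shopping; a tiny Python sketch:

```python
def extra_clients(switch_ports):
    """Wired clients a switch adds to a network: total ports minus the
    one port used as the uplink to the router or existing network."""
    return switch_ports - 1

print(extra_clients(4))   # 3  -- a four-port switch adds three clients
print(extra_clients(24))  # 23 -- a 24-port switch adds 23 clients
```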

WAN port: Generally, a router has just one WAN port. (Some business routers come with dual WAN ports, so one can use two separate Internet services at a time.) On any router, the WAN port is always separate from the LAN ports, and often comes in a different color to distinguish itself. A WAN port is exactly the same as a LAN port, just with a different usage: to connect to an Internet source, such as a broadband modem. The WAN allows the router to connect to the Internet and share that connection with all the Ethernet-ready devices connected to it.

Broadband modem: Often called a DSL modem or cable modem, a broadband modem is a device that bridges the Internet connection from a service provider to a computer or to a router, making the Internet available to consumers. Some providers offer a combo device that’s a combination of a modem and a router, or wireless router, all in one.

Network cables: These are the cables used to connect network devices to a router or a switch. They are also known as Category 5 cables, or CAT5 cables. Currently, most, if not all, CAT5 cables on the market are actually CAT5e, which is capable of delivering Gigabit Ethernet data speeds. The latest network cabling standard currently in use is CAT6, which is designed to be faster and more reliable than CAT5e. The difference between the two is the wiring inside the cable and at both ends of it. CAT5e and CAT6 cables can be used interchangeably and in my personal experience are basically the same, except CAT6 is more expensive. For most home usage, what CAT5e has to offer is more than enough. In fact, you probably won’t notice any difference if you switch to CAT6, but it doesn’t hurt to use CAT6, either, if you can afford it.

Now that we’re clear on wired networks, let’s move on to a wireless network.

2. Wireless networking: Standards and devices
A wireless network is very similar to a wired network with one big difference: devices don’t use cables to connect to the router and one another. Instead, they use wireless connections, known as Wireless Fidelity, or Wi-Fi, which is a friendly name for the 802.11 networking standard supported by the Institute of Electrical and Electronics Engineers (IEEE). This means wireless networking devices don’t need to have ports, but just antennas, which sometimes are hidden inside the device itself. In a typical home network, there are generally both wired and wireless devices, and they can all talk to one another. In order to have a Wi-Fi connection, there needs to be an access point and a Wi-Fi client.

Access point: An Access point (AP) is a central device that broadcasts the Wi-Fi signal for Wi-Fi clients to connect to. Generally, each wireless network, like those you see popping up on your smartphone’s screen as you walk around a big city, belongs to one access point. You can buy an AP separately and connect it to a router or a switch to add Wi-Fi support to a wired network, but generally, you want to buy a wireless router, which is a regular router (one WAN port, four LAN ports, and so on) with a built-in access point. Some routers even come with more than one access point (see dual-band router below).

Wi-Fi client: A Wi-Fi client or WLAN client is a device that can detect the signal broadcast by an access point, connect to it, and maintain the connection. (This type of Wi-Fi connection is established in the Infrastructure mode, but you don’t have to remember this.) Most, if not all, laptops, smartphones, and tablets on the market come with built-in Wi-Fi capability. Those that don’t can be upgraded to that via a USB or PCIe Wi-Fi adapter. Think of a Wi-Fi client as a device that has an invisible network port and an invisible network cable. This metaphorical cable is as long as the range of a Wi-Fi signal.

Wi-Fi range: This is the radius an access point’s Wi-Fi signal can reach. Typically, a Wi-Fi network is most viable within about 150 feet of the access point. This distance, however, changes based on the power of the devices involved, the environment, and, most importantly, the Wi-Fi standard. A good Wireless-N access point can offer a range of up to 300 feet or even farther. The Wi-Fi standard also determines how fast a wireless connection can be, and it’s the reason Wi-Fi gets complicated and confusing, especially once the Wi-Fi frequency bands are mentioned, which I just did.

Frequency bands: These bands are the radio frequencies used by the Wi-Fi standards: 2.4GHz, 5GHz, and 60GHz. The 2.4GHz band is currently the most popular, meaning it’s used by most existing network devices. That, plus the fact that home appliances such as cordless phones also use this band, makes its signal quality generally worse than that of the 5GHz band due to oversaturation and interference. The 60GHz band is used only by the 802.11ad standard (more below).

Depending on the standard, some Wi-Fi devices use either the 2.4GHz or the 5GHz band, while others use both and are called dual-band devices. A few devices also support the 60GHz band, making them tri-band devices. Following are the existing Wi-Fi standards, starting with the oldest:

802.11b: This was the first commercialized wireless standard. It offers a top speed of 11Mbps and operates only on the 2.4GHz frequency band. The standard was first available in 1999 and is now totally obsolete; 802.11b clients, however, are still supported by access points of later Wi-Fi standards.

802.11a: Similar to 802.11b in terms of age, 802.11a offers a cap speed of 54Mbps at the expense of much shorter range, and uses the 5GHz band. It’s also now obsolete, though it’s still supported by access points of later standards.

802.11g: Introduced in 2003, the 802.11g standard marked the first time wireless networking was called Wi-Fi. The standard offers the top speed of 54Mbps but operates on the 2.4GHz band, hence offering better range than the 802.11a standard. It’s still used in many mobile devices, such as the iPhone 3G and the iPhone 3Gs. This standard is supported by access points of later standards.

802.11n or Wireless-N: Available since 2009, 802.11n has been the most popular Wi-Fi standard, with lots of improvements over the previous ones, such as making the range of the 5GHz band comparable to that of the 2.4GHz band. The standard operates on both the 2.4GHz and 5GHz bands and started a new era of dual-band routers, those that come with two access points, one for each band. There are two types of dual-band routers: selectable dual-band routers that can operate in one band at a time, and true dual-band routers that simultaneously offer Wi-Fi signals on both bands.

On each band, the Wireless-N standard is available in three setups: single-stream, dual-stream, and three-stream, offering cap speeds of 150Mbps, 300Mbps, and 450Mbps, respectively. This in turn creates three types of true dual-band routers: N600 (each of the two bands offers a 300Mbps speed cap), N750 (one band has a 300Mbps speed cap while the other caps at 450Mbps), and N900 (each of the two bands offers up to a 450Mbps cap speed).
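The N-class labels above are just the sum of the two bands' speed caps. A tiny sketch (the function name and structure are my own, purely illustrative):

```python
# Per-band speed caps for the three 802.11n stream setups, in Mbps.
STREAM_CAP_MBPS = {1: 150, 2: 300, 3: 450}

def n_class(band_a_streams, band_b_streams):
    """Marketing label for a true dual-band Wireless-N router:
    'N' followed by the sum of both bands' caps."""
    total = STREAM_CAP_MBPS[band_a_streams] + STREAM_CAP_MBPS[band_b_streams]
    return f"N{total}"

print(n_class(2, 2))  # N600: 300Mbps on each band
print(n_class(2, 3))  # N750: 300Mbps on one band, 450Mbps on the other
print(n_class(3, 3))  # N900: 450Mbps on each band
```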

 

Note: In order to have a Wi-Fi connection, both the access point (router) and the client need to operate on the same band, either 2.4GHz or 5GHz. For example, a 2.4GHz client, such as an iPhone 4, won't be able to connect to a 5GHz access point. If a client supports both bands, it will only use one of them to connect to an access point, and when applicable it tends to "prefer" the 5GHz band over the 2.4GHz band, for better performance.

 

802.11ac or 5G Wi-Fi: This latest Wi-Fi standard operates only on the 5GHz frequency band and offers Wi-Fi speeds of up to 1.3Gbps (or 1,300Mbps) when used in the three-stream setup. The standard also comes with dual-stream and single-stream setups that cap at 900Mbps and 450Mbps, respectively. (Note that the single-stream setup of 802.11ac is as fast as the top three-stream setup of 802.11n.)

Currently, there are just a few 802.11ac routers on the market, such as the Netgear R6300, the Asus RT-AC66U, and the Buffalo WZR-D1800H, but it's predicted that the standard will become increasingly popular as hardware devices such as laptops, tablets, and smartphones with built-in 802.11ac become more readily available.

Technically, the 802.11ac standard is about three times faster than the 802.11n (or Wireless-N) standard and therefore is much better for battery life (since it has to work less to deliver the same amount of data). In real-world testing so far, I've found that 802.11ac is about twice the speed of Wireless-N, which is very good. (Note that the real-world sustained speeds of wireless standards are always much lower than the theoretical speed caps, partly because those caps are determined in controlled, interference-free environments.) The fastest real-world speed of an 802.11ac connection I've seen so far is 42MBps (megabytes per second), provided by the Asus RT-AC66U, which is close to that of a Gigabit Ethernet wired connection.
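One unit trap worth flagging: cap speeds are quoted in megabits per second (Mbps), while that 42MBps figure is in megabytes per second. A quick conversion shows just how far real-world throughput sits below the theoretical cap:

```python
# 1 byte = 8 bits, so MBps * 8 = Mbps.
measured_MBps = 42                 # fastest real-world 802.11ac result cited above
measured_Mbps = measured_MBps * 8  # 336Mbps
cap_Mbps = 1300                    # three-stream 802.11ac theoretical cap

print(measured_Mbps)                        # 336
print(round(measured_Mbps / cap_Mbps, 2))   # 0.26 -- about a quarter of the cap
```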

On the same 5GHz band, 802.11ac devices are backward-compatible with Wireless-N and 802.11a devices. While 802.11ac is not available on the 2.4GHz band, for compatibility purposes, an 802.11ac router will also come with a three-stream (450Mbps) Wireless-N access point. In short, an 802.11ac router is basically an N900 router plus support for 802.11ac on the 5GHz band.

That said, let me restate the rule of thumb one more time: The speed of a network connection is determined by the slowest speed of any of the parties involved. That means if you use an 802.11ac router with an 802.11a client, the connection will cap at 54Mbps. In order to get the top 802.11ac speed, you will need to use a device that’s also 802.11ac-capable.
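The rule of thumb reduces to a one-liner: a connection runs at the slowest speed among the parties involved. A minimal sketch (the function name and the sample speeds are illustrative):

```python
def connection_speed(*party_speeds_mbps):
    """Effective link speed is capped by the slowest party involved."""
    return min(party_speeds_mbps)

# 802.11ac router (1,300Mbps cap) talking to an 802.11a client (54Mbps cap):
print(connection_speed(1300, 54))    # 54
# Two 802.11ac-capable devices:
print(connection_speed(1300, 1300))  # 1300
```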

802.11ad or WiGig: The 802.11ad wireless networking standard just became part of the Wi-Fi ecosystem during CES 2013. Prior to that, it was considered a different type of wireless networking.

802.11ad uses the 60GHz frequency band to offer a data rate of up to 7Gbps (some seven times the speed of wired Gigabit Ethernet), but has much shorter range (some 30 feet) compared with other Wi-Fi standards. On top of that, it generally requires a clear line of sight (no obstacles between devices) to work well.

For this reason, 802.11ad is best used to connect peripheral devices, such as a laptop and a docking station, as in the case of the first tri-band Wi-Fi clients from Wilocity. Going forward, there will be more devices and applications that use this Wi-Fi standard. 802.11ad, by itself, is not backward-compatible with any existing Wi-Fi standards; it's designed not to replace them but to coexist with them.

3. More on wireless networking
In wired networking, a connection is established the moment you plug the ends of a network cable into the two respective devices. In wireless networking, it’s more complicated than that.

Since the Wi-Fi signal broadcast by the access point is literally in the air, anybody with a Wi-Fi client can connect to it, and that might pose a serious security risk. To prevent this and let only approved clients connect, the Wi-Fi network needs to be password-protected (or, in more serious terms, encrypted). Currently, there are a few methods used to protect a Wi-Fi network (called "authentication methods"): WEP, WPA, and WPA2, with WPA2 being the most secure, while WEP is becoming obsolete. WPA2 (as well as WPA) offers two ways to encrypt the signal: Temporal Key Integrity Protocol (TKIP) and Advanced Encryption Standard (AES). The former is for compatibility (allowing legacy clients to connect); the latter allows for faster connection speeds and is more secure, but works only with newer clients. On the access point or router side, the owner sets the password (or encryption key) that clients use to connect to the Wi-Fi network.
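For the curious, here's a peek under the hood of WPA/WPA2-Personal: the passphrase you type isn't used directly. The standard (IEEE 802.11i) derives a 256-bit pre-shared key from it using PBKDF2-HMAC-SHA1, salted with the network name (SSID), over 4,096 iterations. A sketch using Python's standard library, checked against a published test vector from the standard:

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> str:
    """Derive the WPA/WPA2-Personal pre-shared key from passphrase + SSID
    (PBKDF2-HMAC-SHA1, 4096 iterations, 256-bit output), per IEEE 802.11i."""
    key = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    return key.hex()

# Published 802.11i test vector: passphrase "password", SSID "IEEE"
print(wpa2_psk("password", "IEEE"))
# f42c6fc52df0ebef9ebb4b90b38a5f902e83fe1b135a70e23aed762e9710a12e
```

Salting with the SSID is also why precomputed password-cracking tables only work against common network names, one more reason not to leave your router on its default SSID.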

If the above paragraph seems complicated, that’s because Wi-Fi encryption is very complicated. To help make life easier, the Wi-Fi Alliance offers an easier method called Wi-Fi Protected Setup.

Wi-Fi Protected Setup or WPS: Introduced in 2007, Wi-Fi Protected Setup is a standard that makes it easy to establish a secure Wi-Fi network. The most popular implementation of WPS is the push button. Here’s how it works: On the router (access point) side, you press the WPS button. Now, within 2 minutes, you press the WPS button on the Wi-Fi clients, and that’s all you need for them to connect to the access point. This way you don’t have to remember the password (encryption key) or type it in. Note that this method only works with devices that support WPS. Most networking devices released in the last few years do, however.

Wi-Fi Direct: This is a standard that enables Wi-Fi clients to connect to one another without a physical access point. Basically, this allows one Wi-Fi client, such as a smartphone, to turn itself into a “soft” access point and broadcast Wi-Fi signals that other Wi-Fi clients can connect to. This standard is very useful when you want to share an Internet connection. For example, you can connect your laptop’s LAN port to an Internet source, such as in a hotel, and turn its Wi-Fi client into a soft AP. Now other Wi-Fi clients can also access that Internet connection. Wi-Fi Direct is actually most popularly used in smartphones and tablets, where the mobile device shares its cellular Internet connection with other Wi-Fi devices, in a feature called personal hot spot.

4. Power line networking:
When it comes to networking, you probably don't want to run network cables all over the place, which makes Wi-Fi a great alternative. Unfortunately, there are some places a Wi-Fi signal can't reach, such as that corner in the basement, either because it's too far away or because there are thick concrete walls in between. In this case, the best solution is a pair of power line adapters.

Power line adapters basically turn the electrical wiring of a home into network cables for a computer network. You need at least two power line adapters to form the first power line connection. The first adapter is connected to the router and the second to the Ethernet-ready device at the far end. There are some routers on the market, such as the D-Link DHP-1320, that have built-in support for power line, meaning you can skip the first adapter. More on power line devices can be found here.

Currently there are two main standards for power line networking, HomePlug AV and Powerline AV+ 500. They offer speed caps of 200Mbps and 500Mbps, respectively.

That’s it. If you haven’t found your questions answered, send them to me via Facebook or Twitter, or just post them in the comments section below. Want to learn more about how to best optimize your Wi-Fi network? Check out part 2.

 

Rule of thumb: The speed of a network connection is determined by the slowest speed of any party involved. For example, in order to have a wired Gigabit Ethernet connection between two computers, both computers, the router they are connected to, and the cables used to link them together all need to support Gigabit Ethernet. If you plug a Gigabit Ethernet device and a regular Ethernet device into a router, the connection between the two will cap at the speed of Ethernet, which is 100Mbps.

Note: Technically, you can skip an access point and make two Wi-Fi clients connect directly to each other, in the Ad hoc mode. However, similar to the case of the crossover network cable, this is rather complicated and inefficient, and is far less used than the Infrastructure mode.

Source: cnet

http://howto.cnet.com/8301-11310_39-57485724-285/home-networking-explained-heres-the-url-for-you/

Learn how to correctly set up your subwoofers for optimal placement and connectivity.

Merely buying a great subwoofer is no guarantee that you’ll wind up with great bass. There are too many ways to squander its performance potential, and that’s why putting in the extra effort to achieve proper subwoofer setup is crucial. This two-part guide will help you get the best room-shaking bass from your subwoofer.

Subwoofer Setup Part I:
Placement and positioning

While a subwoofer’s deep bass is nondirectional, it would be unwise to just stick the sub anywhere that’s convenient in your room.

That’s why it’s worth making an effort to find the best location for your sub; it can make a dramatic difference in the sound. Corner placement is the de facto strategy for most people, possibly because a corner keeps the sub out of the way and almost always produces the most bass; but corner placement may not yield the most accurate bass or the smoothest transition to the speakers. The sub and speakers have to work together as a team, and ideally you should never hear the sub as a separate sound source. All of the bass should appear to come from the speakers.

With small speakers, it’s best to keep the sub within 3 or 4 feet of the front left or right speakers. Once the sub is a lot farther away, it will be harder to maintain the illusion the bass is coming from the speakers. For really small speakers or skinny sound bars, keep the sub as close as possible to the speaker(s).

If you have larger speakers (with 4-inch or larger woofers), some placement experimentation may be useful; play a CD with lots of deep bass and keep repeating the track as you move the sub to all of the visually acceptable locations in your listening room. You’ll be amazed just how different the bass sounds in different locations — some will be muddy, some will sound louder, and some will reduce the bass volume. The goal is to get the best balance of deep bass from the sub and still have the mid and upper bass from the speakers in equal proportions (adjust the subwoofer volume control in each new position). In some rooms, smooth bass response won’t be all that hard to achieve, but I’ve heard my share of “problem” rooms where the bass always sounds boomy or muddy.

If you’re having problems finding the perfect spot, try this method: move your couch or chair out of the way, or into another room, and put the sub in the listening position. Yes, I know that sounds like a crazy idea, but it’s just for test purposes. Now play music and movies with lots of bass, and take a little stroll around your room, stopping in the spots where you’d like to place the sub. As you move about you’ll notice the bass’ apparent loudness and definition changes from place to place. When you find the place that sounds the best, put the sub in that spot.

When all else fails, try placing the sub as close as possible to your couch or chair, with the sub in the “end table” position. That location can work wonders and really improve the sound of your subwoofer.

Larger speakers are generally easier to match with subs; small speakers or speakers with 4-inch or smaller woofers can require more fine-tuning to get right.

Subwoofer Setup Part II:
Connectivity and fine-tuning

A Hsu subwoofer’s rear panel

(Credit: Steve Guttenberg/CNET)

If you have a wireless subwoofer, skip ahead two paragraphs. The Hsu Research subwoofer’s rear panel pictured on the right is fairly typical. To non-audiophiles the maze of connectors can be intimidating, but in most instances the single-cable Sub In connection will be the easiest and best-sounding hookup method. Here you can see the Sub In connection on the Hsu’s rear panel; on other subs the input may be labeled LFE, Direct, or Bypass. Next, turn the sub’s (low-pass) crossover control knob to its maximum, highest numerical setting (you’re going to rely on your AV receiver’s internal crossover control to route the mid and high frequencies to the speakers and the bass to the sub). Turn the volume control halfway up.

If you need a long interconnect or RCA subwoofer cable, I recommend Blue Jeans Cable. How long is long enough? Measure the distance between your AV receiver and sub and remember to include the distances up and down over doorways and furniture. Buying a cable a foot or two too short is a drag, and after you’ve opened the package you may not be able to return it for a refund or exchange.

If your AV receiver has an auto speaker setup program, run the complete setup routine with the calibration microphone that came with the receiver. If you like what you hear, great, you’re done! Then again, don’t be surprised if the sub still doesn’t sound as good as you think it should. I’m not always happy with the subwoofer’s sound after I run these programs. So if you have any doubts, try turning the subwoofer’s volume control up or down. That might be all you need to do. But if you don’t like the change, return to the previous setting or rerun the auto setup to return to your original calibration settings.

If you’re still not satisfied with the sound try using the receiver’s manual speaker setup. If you’re lucky enough to have large floor-standing speakers with 8-inch or larger woofers, you may wish to run them as “large” speakers. But your center and surround speakers will still likely work best run as “small” speakers. On some receivers you’ll be presented with a wide range of subwoofer or crossover settings, from 40Hz up to as high as 250Hz. Your speakers’ or subwoofer’s user manual may offer specific guidance in this area; otherwise use the Audiophiliac’s crossover recommendations: for small speakers with 2- or 3-inch woofers, try settings between 150 and 200Hz; for midsize speakers with 4- or 5-inch woofers, use 80 or 100Hz; and with large bookshelf speakers or skinny floor-standing speakers, try a 60 or 80Hz crossover. When in doubt about the speakers’ sizes, always select “small” on the setup menu.
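The crossover recommendations above boil down to a simple lookup by woofer size. A sketch (the function name and the size thresholds are my own reading of the guidance; treat the values as starting points for listening, not gospel):

```python
def suggested_crossover_hz(woofer_inches):
    """Return a (low, high) range of crossover settings in Hz to try,
    based on the Audiophiliac's rules of thumb quoted above."""
    if woofer_inches <= 3:    # small satellites with 2- or 3-inch woofers
        return (150, 200)
    elif woofer_inches <= 5:  # midsize speakers with 4- or 5-inch woofers
        return (80, 100)
    else:                     # large bookshelf or skinny floor-standing speakers
        return (60, 80)

print(suggested_crossover_hz(3))  # (150, 200)
print(suggested_crossover_hz(5))  # (80, 100)
print(suggested_crossover_hz(8))  # (60, 80)
```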

One of the other controls you may find on your subwoofer’s rear panel is marked “phase.” It’s provided because the speakers and subwoofer sound best when they are in-phase — meaning their woofers move in and out in sync with each other. To check your sub’s phase, play music with lots of bass, listen for a minute or so, and have a friend sitting by the sub flip the sub’s 0/180-degree phase switch slowly back and forth. The correct setting is the one that yields more bass. You may have to try a few different recordings before you hear any difference, and it might help to turn up the sub’s volume level for this test. If you don’t hear any difference between the 0 and 180-degree settings, leave the phase control in the 0 position.

Setting the subwoofer volume is next. Precisely matching the volume levels of the front left, center, right, and surround speakers is important, but subwoofer volume is more subjective. Some folks like to feel the sub working the room all the time — and some prefer to only hear the sub’s contributions with big special-effects-driven movies or dance music. A sound level meter can be a big help when setting speaker levels, but it’s nearly useless for determining the sub’s correct volume level. The “by ear” method works well enough. I can set the sub’s volume level with DVDs in 10 minutes or less, but with CDs I might be fiddling around for days. Again, if you feel like this is all a little too complicated, relax, take a deep breath, run the auto setup program, and let the receiver sort things out.

Source: cnet.com

Washing machines, toilets, cups of tea, foggy weather…these are a few of our favorite things. That is, until they fill the lungs of our cherished cell phone, leaving us weeping over a soggy, lifeless metal carcass.

Dropped your handset in the bath? Fumbled your phone and plopped it in the loo? Don’t panic — just follow these steps and you’ll have a good chance of breathing life back into your drowned smartphone. Just be sure to check out our list below of what not to do for some useful mythbusting.

What to do
While dismantling your phone completely would help it to dry out more effectively, doing so will void your warranty. It usually requires specialist tools and may jeopardize your phone if you’re not careful, so I don’t recommend it. Instead, follow these steps:

1. Firstly, retrieve your handset from the drink right away. A prolonged plunge will increase the risk of damage.

2. Resist the urge to check if it still works or press any buttons, since putting pressure on the keys could shift liquid farther into the device.

3. In all cases, the best thing to do is immediately pull out the battery, thus minimizing power to the device that may cause it to short circuit.

4. If you own a handset with a nonreplaceable battery, like an iPhone or Nokia Lumia, then pulling the battery isn’t an option. You’ll have to risk pressing a few buttons to check if it’s still on and to swiftly turn it off if it is. Take care when handling the phone in this case.

5. Remove any peripherals and attachments on your phone, such as cases.

6. Extract the SIM card and any SD cards it carries, leaving ports or covers on your handset open to aid ventilation.

7. Dry off everything with a towel, including the exterior of your handset, being careful not to let any water drain into openings on the phone.

8. Even when everything’s dry, it’s very likely there’s latent moisture within the device that you’ll want to get out before turning it on. The most oft-reported fix for a sodden phone is to bury the handset in a bowl of dry rice. Desiccant materials, such as rice, have hygroscopic properties that can attract and absorb moisture. You can also use silica gel packs — the kind used in shoe boxes — to greater effect. If you don’t have any lying around, uncooked rice will do nicely.

Place your phone in an airtight container and completely cover it with your choice of desiccant. Leave the container for 24 to 48 hours for the material to draw all the moisture out of your handset. If you feel like splashing out, you can buy silica-lined, hermetically sealed pouches that are specifically designed for the task.

9. When you’re confident it’s dried out, replace the battery and try switching it on. Good luck!

What not to do
A purported fast-track method of drying out a wet phone is to use a hairdryer, or to apply heat to the device in other ways. While this would successfully evaporate all the moisture still sitting within the handset, the device risks getting too hot, which can damage the components.

In cases of severe waterlogging, the steam created may not be able to fully ventilate and would simply condense again elsewhere in the phone. You may get away with it, but it seems rather perilous, so my recommendation is to avoid this method.

Another recurring recommendation is to stick your phone in a freezer, wrapped in paper towel to prevent frost damage. Supposedly, the reduced conductivity of water when close to freezing temperatures will stop your phone from short circuiting when in use.

This is definitely not a long-term solution, however, since as soon as the ice begins to thaw, you’re left with the same, if not exacerbated, problem. In the process you’ll probably mess up your phone’s very fragile screen, which hardly seems worth risking for a short-term fix of dubious effectiveness.

For less-severe dunkings, you may get away with drying your phone thoroughly on the exterior alone, paying special attention to openings like the headphone jack and USB port. To this end, a few have suggested gently poking into them with a toothpick wrapped in paper towel. While jabbing into your phone with a stick is always a bit iffy, the biggest risk is that rags of sodden paper could get stuck inside your phone and play havoc with its innards.

One suggestion is to overcharge the handset so that the build-up of heat is gradual and not excessive, but this carries all the risks you’d expect with running a current through wet circuitry.

Inevitably, someone reading this will wonder if it’s possible to dry out a phone by putting it in the microwave. Please see this for an adept response.

Beware corrosion 
If you succeed in reviving your phone, then congratulations! But you may not yet have won the war with the Grim Reaper of gadgetry. Metal inside your phone that has come into contact with water and oxygen can rust, and that corrosion will spread over time.

While a professional phone fixer may be able to clear out any corrosion by swabbing the circuitry with rubbing alcohol — again, don’t try this at home, kids — in many cases, the eventual demise of your phone is only a matter of time. Sorry.

 

http://howto.cnet.com/8301-11310_39-57494390-285/how-to-save-a-wet-phone-and-what-not-to-do/