Tips That Can Help Lower Cholesterol Levels

The liver produces most of the cholesterol in our bodies, so it is important to keep to a healthy diet of vegetables and fresh fruits and to exercise daily. There are two types of cholesterol in our bodies: low-density lipoproteins (LDL), the bad cholesterol, and high-density lipoproteins (HDL), the good cholesterol. High LDL levels are very harmful to your body because they cause blockages in your arteries, which can lead to heart attacks. The main function of HDL is to reduce the production and absorption of your bad cholesterol.

Individuals battling high cholesterol can choose from various treatment alternatives, for example statins, which are quite popular and usually everyone's first choice. Although very effective at lowering LDL levels, statins can have severe side effects. It is best to lower cholesterol naturally by eating foods that are low in cholesterol and by using natural cholesterol supplements such as Policosanol (found in sugar cane), Theaflavins (found in both green and black tea), and Phytosterols (found in plant membranes). Combining a healthy diet with natural supplements will help you combat high cholesterol in the safest, most natural way, without having to worry about the other ailments brought on by cholesterol medication.

Low-cholesterol diet menus and recipes are a good start to a healthier lifestyle. Keep to a diet that is enjoyable to eat and at the same time beneficial to your health. Red meat should be replaced with fish or chicken, and foods rich in saturated fats should be eliminated from your diet completely or consumed only occasionally and in small quantities. Healthy food may not be as tasty as fatty food, but it offers the most benefit to your body.

Increase the amount of soluble fiber in your diet by eating foods like whole grains; they aid digestion and lower your bad cholesterol by removing dietary fats. When preparing meals, avoid deep-frying your food to prevent the loss of nutrients, and substitute vegetable oil for animal fat when cooking. Instead of snacking on potato chips and candy bars, get some exercise by taking a short walk or jogging a short distance. If you eat out, go to a restaurant that serves foods low in cholesterol, and when shopping for food, always check the labels for cholesterol content. If you have high cholesterol and drink alcohol or smoke, stop these habits entirely, for they increase your chances of a stroke or heart attack. It is quite simple to maintain your health if you take good care of yourself.

Source: Lowering Your Cholesterol Naturally: Learn How To Get Started Lowering Your Cholesterol Naturally Today by Amy Austen

The Causes of High Cholesterol and Ways to Lower it Naturally

High cholesterol is caused by the lack of a proper, healthy, balanced diet and by bad eating habits; in some cases it is hereditary. Certain individuals inherit familial hypercholesterolemia, a condition in which cholesterol levels are dangerously high. In other cases, individuals eat a lot of foods that are high in cholesterol or saturated fats, which increases their low-density lipoprotein (LDL) levels, that is, the bad cholesterol that blocks arteries and can subsequently cause heart disease, among other ailments.

To maintain normal cholesterol levels and improve your health, it is advisable to eat foods that are low in cholesterol and to exercise regularly. Foods like fresh fruits, vegetables, nuts, fish, chicken, and lean meat should be incorporated into your daily diet; they are low in cholesterol and will increase your high-density lipoprotein (HDL) levels, the good cholesterol whose main function is to reduce the LDL levels in your body through excretion.

Taking regular walks or signing up at a gym are just some of the ways you can exercise. You need not exercise every day, but do so as often as you can, preferably three or four times a week. An individual trying to combat high cholesterol, however, is best advised to exercise daily for an hour or two.

It is sometimes hard to avoid favorite foods like candy bars and chips, but remember that your main focus is to lower the cholesterol levels in your body, and especially to ensure that your cholesterol ratio (your LDL-to-HDL ratio) is good. Total cholesterol figures do not indicate your individual good and bad cholesterol levels, so they are not very accurate for making a diagnosis. Doctors suggest that a good cholesterol ratio is 3:1, and that to maintain normal cholesterol levels you should consume no more than 200 mg of cholesterol a day.

Because the liver produces most of the cholesterol in your body, a change in diet alone might not do the trick. It is best combined with natural cholesterol supplements containing Phytosterols, Policosanol, and D-Limonene, which help combat bad cholesterol by reducing its production and absorption while simultaneously increasing your good cholesterol levels.

Living a healthy lifestyle should be a priority for anyone who values life. It is simple to keep to a healthy diet and regular exercise, and one should not wait until suffering a heart attack caused by high cholesterol to change one's lifestyle.

Source: Lowering Your Cholesterol Naturally: Learn How To Get Started Lowering Your Cholesterol Naturally Today by Amy Austen

The Economic Impact of Mobile IP

The economic impact of Mobile IP, the standard that allows IP sessions to be maintained even when switching between different cells or networks, has been nothing short of staggering in terms of both scale and acceleration. As noted, the first smartphones appeared around 2007. Their success quickly led to a proliferation of smart devices. Industry analysts predict that by the end of 2015, there will be more than 2 billion smart devices in service, with a market value of more than $700 billion. These devices will drive an applications market expected to be worth another $25 billion in the same period.

As remarkable as this growth is, it's dwarfed by the growth in data usage. According to studies, data usage grew by an average of 400 percent per year between 2005 and 2010 in the U.S. and 350 percent per year in Western Europe. It's instructive to put numbers to these figures. Figure 3-8 shows the total mobile data usage on a per-month basis. "Petabytes" is a hard number to grasp, however. For an individual, it means that if the average monthly mobile data usage was 20 MB in 2005 (which was a lot back then, and would have been quite expensive), the same average user would consume 20 GB per month in 2010, a mind-bending thousand-fold (roughly 100,000 percent) increase.
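A quick sanity check on these figures may help. This sketch uses only the numbers quoted above and assumes decimal units (1 GB = 1,000 MB); with binary units the overall factor would be 1,024 instead.

```python
# Back-of-the-envelope check of the usage figures cited above:
# 20 MB/month in 2005 vs. 20 GB/month in 2010 (decimal units assumed).

mb_2005 = 20                  # average monthly usage in 2005, in MB
mb_2010 = 20 * 1000           # 20 GB expressed in MB

growth_factor = mb_2010 / mb_2005        # 1000.0: a thousand-fold jump
percent_of_2005 = growth_factor * 100    # 100000.0: 2010 usage as a % of the 2005 figure

# Compound annual factor implied over the five years 2005-2010:
annual_factor = growth_factor ** (1 / 5)  # ~3.98x per year

print(f"overall: {growth_factor:.0f}x ({percent_of_2005:.0f}% of 2005 usage)")
print(f"implied annual factor: {annual_factor:.2f}x")
```

Note that the implied annual multiple of about 4x lines up with the "400 percent per year" growth rate quoted for the U.S.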

The other sea change is that by the end of 2014, tablets would exceed PCs in total units sold. This signifies not just a mobile capability, but an expectation of mobility by consumers. This "new normal" affects the entire technology ecosystem. The obvious players affected are the tablet providers and their parts suppliers, who continue to push the limits of performance and miniaturization. This is only the tip of a very large iceberg, however. Under the water line are massive cascading implications for mobile carriers, data suppliers, and their providers.

For the carriers, the amount of data consumed over mobile connections far exceeds even the boldest predictions made 10 or even 5 years ago. What's more, the rate of data consumption seems to be accelerating. Mobile providers have been scrambling to keep up with demand, which has boosted subscription rates and driven a great deal of innovation in the areas of compression, streaming, caching, and other data-delivery efficiencies. Interestingly, though, mobile access is beginning to show signs of commoditization, with some providers now giving away data that used to generate lucrative data plans. For example, one major carrier now offers unlimited music streaming outside the data plan. This is a great way to capture a group of users (mostly teens and young adults) who represent potential lifelong customers.

Data providers have also seen incredible growth, and are rapidly becoming media creators in addition to hosting media from other sources. With users now expecting high-performance data over mobile connections, data providers have been compelled to build massive, high-performance data centers in many regions to ensure customer satisfaction. This has proven to be an economic boon for switch and equipment providers, as well as to the economies of many small rural markets where the data centers are built. Just 20 years ago, many considered the availability of downloadable music to be just short of a miracle, even though it took 56 minutes per song. Today, kids complain if the high-definition movie they are watching on their phones (from the back seat of a car traveling 70 miles per hour) buffers for more than 10 seconds. Clearly, the world has changed.

Unfortunately, all the life-changing benefits of high-speed mobile data come with a significant security risk. As more and more facets of our personal lives have an associated mobile app, more and more personal data will end up on people's phones. This is a gold mine for would-be thieves, who are way ahead of the average unwitting mobile handset user. For cybercriminals who have honed their skills against trained IT adversaries, the average person who may or may not know anything at all about cybersecurity is no match at all. For the IT security specialist, this would be nothing more than a cautionary tale—except for the fact that many of these same unwitting users have access to corporate servers.

Most big-city tourists worry about pickpockets taking their wallet, which might contain some cash, a few credit cards, and a picture ID. These same people, however, often fail to consider that if their phone or device were compromised, they could find all their credit cards run up, their bank accounts cleared, and new credit cards issued in their name and maxed out as well. For good measure, the phone might then be sold to a third party on a cybercrime version of eBay (which not only exists, but even has holiday sales) to someone who might then use it to breach the victim's company. This may seem far-fetched, but it's all within the realm of the possible.

Taken from: Wireless and Mobile Device Security

BYOD and the BlackBerry Effect

One could make the argument that the company Research in Motion (RIM) Ltd., later called BlackBerry Limited, first opened the door through which BYOD charged. BlackBerry got two things right, which led to its meteoric rise. Interestingly, one of those same things led to the company's subsequent decline.

The first thing BlackBerry got right was the development of the BlackBerry Enterprise Server (BES) in 1999. The BES enabled BlackBerry devices to receive "push" e-mails from Microsoft Exchange Servers, which meant that users could send and receive e-mails no matter where they were (assuming they had cell coverage, which by then was nearly everywhere).

The second thing BlackBerry got right was to focus its sales effort on IT departments rather than on individual consumers. This was a brilliant move, because at the time, to receive push e-mails from a Microsoft Exchange server, all but the most technical users needed IT support. This put IT in control—which is exactly how IT likes it.

More to the point, BlackBerry designed its product to suit its customers' wants and needs—which in the case of IT meant easy integration, broad control capability, and decent security (although there were some security issues). The strategy worked brilliantly. By 2010, BlackBerry boasted 36 million users worldwide. However, many people point to this strategy of selling to IT as the root cause of BlackBerry's subsequent rapid decline.

In 2007, Apple introduced the iPhone, the first of the so-called smartphones. The Android phone quickly followed. Both of these devices (along with others) could also receive push e-mails from Microsoft Exchange servers. Where they differed was their focus on consumer satisfaction and, in the case of the iPhone, on individual prestige. Even the initial launch of the iPhone, which supported no third-party apps, was touted as a BlackBerry killer. With the release of the iPhone 3G in 2008 and its ability to run third-party applications (along with the unveiling of the App Store), the end was near for BlackBerry.

By this time, it was a relatively simple matter to connect to a Microsoft Exchange server without a lot of help from IT. And while many IT departments had a strong preference for BlackBerry standardization, more and more people began showing up at work with iPhones and Android phones. A small but vocal minority pushed to allow third-party devices. If they were told no, many simply did it anyway. As the number of consumer-oriented devices grew, IT was forced to support them.

In the context of this chapter, the real takeaway is that more than any other company, BlackBerry got companies and government organizations accustomed to the idea of employees having mobile devices, giving them near 24/7 access to e-mail no matter where they went. Up to this point, wireless technology had blurred the line between work and not work, but that just meant you could use wireless to connect, shut down and move, and then reconnect. BlackBerry was truly mobile, meaning you could stay connected even as you traveled from place to place. Now workers could (and did) check and respond to e-mail all the time—at dinner, at their kid's soccer game, and (unfortunately) while driving. With this newfound connectedness, the line between work and not work was all but erased.

Many critics point to BlackBerry as a cautionary tale of a company that failed to adapt. But few can deny that BlackBerry changed not only how people work, but also the relationship between companies and employees, to a degree not seen since the Industrial Revolution. It also, unintentionally, opened a new front in the battle for IT security.

Taken from: Wireless and Mobile Device Security

The Evolution of Mobile Networks

Mobile phone technology has been available to consumers for only 30 years, but there have been some amazing advancements in that time. Since the first limited commercial rollout in 1983, there have been four distinct generations of technology. These have gone from basic radio communication with a limited connection range and poor quality voice to smartphones capable of managing high-quality voice while taking and sending a 7-megapixel picture with no noticeable drop in quality. This section reviews each generation of cell phone technology, looking at what it was, how it worked, and what the security implications were and are.

A commercial cellular system, called the Advanced Mobile Phone System (AMPS), was deployed in North America in 1983. AMPS used analog signals to connect to cell towers, using frequency-division multiple access (FDMA) for channel assignment. AMPS succeeded where previous attempts to create a commercial cellular service had failed because of its ability to reuse frequencies (FDMA) and to hand off calls between cells in a relatively seamless way that did not involve the user.

The AMPS system was a commercial success despite serious performance issues. Call quality and reliability were nowhere near that of the PSTN, which limited its usefulness. In addition, FDMA, while considered a breakthrough, still consumed a lot of bandwidth per channel, which limited capacity. AMPS calls were also unencrypted, making it possible to eavesdrop on a call using a scanner. Finally, AMPS phones were relatively easy to clone, allowing non-subscribers to gain access to the service.

Although much-improved second-generation technology soon came along, carriers continued to support AMPS phones until 2008, when the older technology was finally phased out.

The big change from 1G to 2G was the conversion from analog to digital. Initially referred to as Digital Advanced Mobile Phone System (D-AMPS), 2G cellular phones and networks used time-division multiple access (TDMA), which greatly improved bandwidth efficiency and subscriber capacity.

Unlike AMPS, which was essentially the same everywhere it was deployed, two distinct systems emerged for D-AMPS. The first was a TDMA-based second-generation technology developed in the late 1980s by an industry consortium consisting mostly of European companies. This technology was called Groupe Spécial Mobile (GSM), although the name was later changed to Global System for Mobile Communications (GSM). The use of GSM was mandated throughout Europe to ensure continent-wide compatibility between countries.

The second major 2G technology was code-division multiple access (CDMA), which refers to both the cellular system and the method of subscriber access. CDMA was the dominant 2G system in the U.S. While CDMA and GSM were not compatible, dual-system phones were eventually developed that could operate on either system.

In addition to offering more efficient use of bandwidth, 2G systems also used encryption, which greatly improved security. One of the downsides, however, was that the lower power requirements of digital systems meant that coverage was often poor outside populated areas, which had greater cell density. Another problem with digital was that unlike an analog signal, which degrades in a linear way, digital signals drop off completely when the signal strength falls below a certain threshold. When it's good, digital quality can be very good. But when it's bad, it's essentially unusable.

This 2G technology was the precursor to mobile data networks. The first of these was used for Short Message Service (SMS), which introduced the world to texting. At first, SMS did not seem like a compelling feature. But its use exploded with teens and young adults to the point where many used their phones only for texting. Eventually, subscription plans were created to accommodate these users.

Although GSM and CDMA were digital technologies and took advantage of multiple-access techniques, both were still circuit-switched, much as the PSTN was. General Packet Radio Service (GPRS) was the first packet-switching technology that allowed data sharing over mobile networks. Still considered a 2G technology but often called 2+ or 2.5G, GPRS allowed access to some Web sites, although data rates proved too slow for what was becoming a growing need and expectation.

EDGE, which AT&T rolled out in 2003, and which other carriers quickly offered, represented an enhancement over GPRS. It offered higher data rates through better data encoding and (at that time) viable data access to many Web sites.

3G Technology
The third generation of mobile technology, called 3G, was the first generation specifically designed to accommodate both voice and data. Based on the International Mobile Telecommunications-2000 (IMT-2000) standards set by the International Telecommunication Union (ITU), 3G can accommodate voice, data, and video.

The first 3G system was rolled out in Japan in 2001. In 2002, it was rolled out in many other parts of the world, including the U.S. and the European Union. Implementation of 3G took longer than anticipated, however. This was in large part due to the need for expanded frequency licensing to accommodate higher bandwidth needs and rapidly increasing subscriber rates. By the end of 2007, however, there were 190 3G systems online in more than 40 countries worldwide.

The most noticeable improvement in 3G was its high-speed data rates. One enhancement to 3G was a mobile protocol called High Speed Downlink Packet Access (HSDPA), which improved data rates to an impressive 14 Mbps. For the first time, the streaming of music and video to mobile devices was supported. Responding to this capability, many content providers created streaming offerings that catered specifically to mobile users.

In addition to the security benefits of 2G, such as encryption, 3G systems also allowed for network authentication, which ensured that users connected to the correct network. On the negative side, smartphones that attached to 3G networks had far more personal-data capabilities—for example, access to bank accounts—as well as access to corporate systems and applications. With the growth in the number of users and an increase in the types of opportunities to exploit, 3G systems and smartphones soon attracted the attention of cybercriminals.

4G and LTE
As of this writing, mobile telephony is in its fourth generation, called 4G, while the fifth generation, called 5G, is in development. Among other improvements, 4G is an all-IP network, allowing the use of ultra broadband and the promise of 1 Gbps data rates. At that throughput level, voice communications can be converted to Voice over IP (VoIP) with high quality, high-definition TV can be streamed to mobile devices, and a host of live interactive gaming applications can be enjoyed.

The two systems currently deployed for 4G are Mobile Worldwide Interoperability for Microwave Access (WiMAX) and Long Term Evolution (LTE). The standards for 4G were developed by the ITU as the International Mobile Telecommunications Advanced (IMT-Advanced) specification. 4G also supports IPv6, which is especially important given the growth of smart devices.

An important change in 4G is the authentication method used. Previous systems used a signaling system called Signaling System 7 (SS7) to set up calls and mobile data sessions. In contrast, 4G uses a signaling protocol called Diameter. Some critics say Diameter sessions are potentially open to hijacking or having users' personal information exposed, making it a less-than-ideal replacement for SS7. In addition, the fact that 4G is an all-IP network opens it up to all the Internet's known security issues. Given the vast amounts of private, personal information, as well as company information, stored on or captured from mobile devices, this represents a significant security vulnerability for both individuals and businesses.

Taken from: Wireless and Mobile Device Security

From Wired to Wireless

The networking industry began to grow in the 1980s and exploded in the 1990s with the combination of affordable personal computers and the growing popularity of the Internet and the World Wide Web. Already on an incredible trajectory in the late 1990s, the industry benefited from another incredible boost in the form of wireless networking, and an even bigger boost was still to come.

This "second wave" of networking was initially made possible as a result of a groundbreaking decision by the U.S. regulatory body in charge of telecommunication rules, the Federal Communications Commission (FCC)—the opening of several bands (contiguous ranges of radio frequencies) of the radio spectrum for unlicensed use in 1985. This was a big change, given that apart from ham radio, which was valued as a nationwide emergency communication system, the radio spectrum was a tightly controlled government asset that required licensed approval for use. This visionary decision (not a phrase often associated with a government regulatory body) had a profound effect on networking as well as on several other industries.

The frequency bands in question (900 MHz, 2.4 GHz, and 5.8 GHz) had previously been reserved for devices such as microwave ovens. The FCC's decision allowed anyone to use these bands (and any company to build a product that used them) as long as they managed interference with other devices. This made possible products such as cordless phones and remote-controlled ceiling fans, and, soon, wireless networks.

At first glance, it's hard to see exactly why wireless had such a huge impact. At the time, wireless performance was not that great. In fact, compared to a hard-wired Ethernet connection, it was pretty lousy. As it turned out, though, users were far more interested in convenience than performance—at least initially. Before the advent of WLAN, if you wanted to connect to a network, you had to go to where the computer was tethered to an Ethernet port. Or, if you had a laptop (these were also becoming cheaper), you had to go where the connection port was. This may not seem like a big deal, but "going to the computer" meant leaving where you were and dropping what you were doing.

Wireless networking changed all that. With WLAN, you brought your computer to where you wanted to be and connected to the network from there. The ability to connect in a meeting room or on your couch in front of the TV far outweighed the slower connection speed, especially since there were very few high-speed network applications at the time. (Streaming media meant waiting 5 or 10 minutes to download one song, for example.) This convenience factor created a massive surge in WLAN usage. In response, manufacturers poured millions into research and development (R&D), which improved performance, which in turn attracted more users.

The first generation of WLAN operated at about 500 kilobits per second (Kbps) on an unlicensed frequency band. In this case, "unlicensed" meant that anyone could use it; it was not restricted or reserved for commercial or government use, as long as the transmission power was kept low. The second generation boosted performance to 2 megabits per second (Mbps), a fourfold improvement. (Note that the term "generation" is used here in the generic sense rather than as a name, as it is when describing mobile network technology.)
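Percentage language around throughput jumps is easy to misread: quadrupling a rate is a 300 percent increase, though it is often loosely described as "400 percent." A one-line check with the rates quoted above:

```python
# First- vs. second-generation WLAN rates quoted above.
first_gen_kbps = 500        # ~500 Kbps
second_gen_kbps = 2000      # 2 Mbps

multiple = second_gen_kbps / first_gen_kbps   # 4.0: four times the original rate
percent_increase = (multiple - 1) * 100       # 300.0: the increase over the original

print(f"{multiple:.0f}x the first-generation rate ({percent_increase:.0f}% increase)")
```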

In 1990, the IEEE established a working group to create a standard for WLANs. In 1997, the IEEE 802.11 standard was ratified, specifying the use of the 2.4 GHz band with data rates of up to 2 Mbps. Different versions of the 802.11 standard were developed in subsequent years and were noted via letter extensions such as a, b, g, and n. Notationally, this would appear as "802.11b," for example.

Taken from: Wireless and Mobile Device Security

Mobile IP Security

Until the latter part of the first decade of the 21st century, mobile phones were predominantly voice only. Yes, people could use them to send text messages and download ringtones and wallpapers, but that was about it. There was only limited support for data, and at rates and throughput that made it prohibitive.

The advent of 3G networks made high-quality Internet access from a mobile device a reality. Such was the rush for mobile Internet communications that data traffic now exceeds voice traffic on telecom providers' networks. With data networking as the new cash cow, telecom providers have focused their strategy on delivering high-speed data transport and services.

Unlike previous devices, however, these new 3G devices could not be locked down. Now the owners of these devices could download applications that used any nearby WLAN to send and receive data, bypassing the telecom operators' expensive data plans. Providing free WLAN access became a great cheap marketing tool; WLAN hotspots have since sprung up in shopping malls and leisure areas across the country.

Unfortunately, cybercriminals were not far behind. They found a vast array of new victims congregated in these areas—particularly teenagers. These kids quickly discovered the joy of instant messaging and other communications over a free WLAN infrastructure, thereby becoming easy targets.

This was mostly because phone manufacturers made instant and easy access a higher priority than basic security. As such, smartphones and tablets were enabled for Bluetooth discovery out of the box. The criminal element rejoiced, as they now had direct access to these devices, which they could surreptitiously use to make voice calls, send data, listen to or transfer calls, gain Internet access, and even transfer money. The full menu of Bluetooth attacks was at the attacker's fingertips, including bluesnarfing (in which an attacker gains unauthorized access to the contacts and other data stored on the phone) and bluejacking (in which an attacker sends unsolicited messages to other Bluetooth devices). These types of attacks were common on 2G mobile devices with Bluetooth in 2002 and 2003.

Today, mobile phones are no longer shipped with Bluetooth enabled in discovery mode. In addition, security has been hardened to prevent unauthorized connections and remote access to the phone's features. Confidence is such that smartphones are trusted for use in e-banking, e-commerce, and e-mail. Despite these improvements in securing wireless mobile devices and the underlying radio networks, however, there is no room for complacency. Cybercriminals, who have become adept at intercepting signals over unencrypted wireless networks, are never far behind.