Tuesday, February 24, 2009
"OnDemand Online" will be available free to Comcast cable TV subscribers, Business Insider reports. That means only customers inside Comcast's cable zone can sign up, but the service itself is available anywhere. For example, a user could still watch on-demand videos away from home (the company is working on a way to verify subscriptions). Also of note, the service would count against Comcast's 250 GB monthly bandwidth cap.
The major difference between this service and Hulu is content. While Hulu's videos come from NBC, Fox and their cable channels like FX, Comcast is inking deals with other cable networks, possibly providing a streaming opportunity for channels like the Food Network and Discovery. In other words, Comcast is going to focus on content that isn't already online.
Because Comcast and other cable providers pay fees that account for roughly half of cable channels' revenue, it's in the channels' best interests to keep providers healthy, Business Insider notes. By comparison, hardly any money comes in through online viewing, so it's more likely for a channel to give its content to Comcast than to an online-only service like Hulu.
U.S. cable, programmers set for Web TV by summer
Cable and satellite TV providers are working on a free online video service to deliver up-to-date cable shows to computers and mobile phones, but the industry is worried the project could cannibalize pay-TV's long-standing revenue model.
Cable network programming is available primarily on cable and satellite TV services, such as Comcast Corp (CMCSA.O) and DirecTV Group Inc (DTV.O), or nascent video services from phone companies.
"This is about bringing new amounts of content to the Internet in a business model that continues to support the creation of that content," said Sam Schwartz, executive vice president of Comcast Interactive Media.
Comcast is leading talks with programmers like Viacom Inc (VIAb.N) and Discovery Communications Inc (DISCA.O), with Time Warner Cable (TWC.N), DirecTV and others involved. Their plans are at different stages, and cable operators will likely discuss putting cable programming online at an industry meeting this week, according to people familiar with the plans.
The project would let cable and satellite TV subscribers watch up-to-date cable shows on the Web, and possibly on mobile phones, for free, possibly as soon as this summer, the sources said.
The idea is to give customers added flexibility to view their favorite shows. It is also seen as a preemptive strike against possible 'cord-cutting' of video services, particularly by younger subscribers used to watching other programs online.
But the project presents a number of business and technology challenges to both operators and programmers.
Cable programmers like Viacom's MTV Networks make money from advertising sales, as well as affiliate fees that cable and satellite TV service providers pay.
Whatever business models are agreed upon will depend to some extent on overcoming technological challenges.
One involves identifying which customers have the right to view a show, and managing digital rights to avoid over-wide distribution. There is also the need to accurately 'time' the content so it is available to users for a restricted period -- so as not to jeopardize other media content distribution systems such as video on demand and DVD releases.
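The 'timing' requirement described above amounts to an availability-window check: an episode may be streamed only between its release into the service and an agreed cutoff, after which viewers fall back to video on demand or DVD. A minimal sketch in Python; the 28-day window and the function name are illustrative assumptions, not any operator's actual policy:

```python
from datetime import datetime, timedelta

# Illustrative availability window for a streamed episode: the show may
# only be watched online for a limited period after its air date, so the
# service does not undercut later VOD and DVD release windows.
def is_streamable(air_date: datetime, now: datetime,
                  window_days: int = 28) -> bool:
    """Return True while `now` falls inside the streaming window."""
    return air_date <= now < air_date + timedelta(days=window_days)

aired = datetime(2009, 6, 1)
print(is_streamable(aired, datetime(2009, 6, 15)))  # inside the window
print(is_streamable(aired, datetime(2009, 7, 15)))  # window has closed
```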
Yet executives also acknowledge the risk of ignoring the Web, as seen by the music and newspaper industries that have suffered as consumers change their media consumption habits.
Comcast sees the project, which it calls On Demand Online, as a natural progression from digital video recorders and video-on-demand channels.
It is working on technology to authenticate subscribers who go to Comcast's Fancast and Comcast.net websites for video. This would effectively create a "wall" behind which programmers might feel comfortable keeping some of their premium shows.
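Authenticating subscribers means proving to the video site that the viewer holds an active cable account. One generic way to build such a "wall" is a token signed by the operator, which the video site can verify without holding the account database itself. The sketch below is only an illustration of that pattern, not Comcast's actual mechanism, and every name in it is invented:

```python
import hashlib
import hmac

# Hypothetical sketch of an authentication "wall": the operator signs a
# subscriber id, and the video site verifies the signature before
# serving premium streams. The key and account ids are invented.
SECRET = b"operator-signing-key"

def issue_token(account_id: str) -> str:
    """Sign the account id so the video site can verify it later."""
    sig = hmac.new(SECRET, account_id.encode(), hashlib.sha256).hexdigest()
    return f"{account_id}:{sig}"

def verify_token(token: str) -> bool:
    """Accept only tokens whose signature matches the operator's key."""
    account_id, _, sig = token.partition(":")
    expected = hmac.new(SECRET, account_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("subscriber-12345")
print(verify_token(token))                    # a valid subscriber
print(verify_token("subscriber-999:forged"))  # fails verification
```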
Sunday, February 22, 2009
Working offline can come with an unexpected risk
A security expert has sounded a warning about features that allow offline access to websites, which let people use services such as web-based e-mail when they are not connected to the internet.
But sites with poor security that use the feature put their visitors at risk of being robbed of their data.
Michael Sutton disclosed the threat at the Black Hat security conference in Washington, DC.
Offline web applications are taking off because of services such as Gears, developed by Google, and HTML 5, a new HTML specification that is still in draft form.
The approach was introduced to many web users in January, when Gmail launched a Gears-powered offline mode that lets users read and write e-mail when they are not connected to the internet.
Mr Sutton stressed that Gmail, Gears and HTML 5 are considered secure, but websites that implement offline features without proper security could put users at risk.
"You can take this great, cool secure technology, but if you implement it on an insecure website, you're exposing it. And then all that security is for naught."
Mr Sutton found that websites which suffer from a well-known security vulnerability known as cross-site scripting are at risk.
A hacker could direct a victim to a vulnerable website and then cause the user's own browser to grab data from their offline database.
Unlike phishing, the whole attack could take place on a reputable site, which makes it harder to detect.
As a proof of concept, Mr Sutton was able to swipe information from the offline version of a time-tracking website called Paymo. Mr Sutton alerted Paymo and it fixed the vulnerability immediately.
Web developers must ensure that their sites are secure before implementing offline applications, said Mr Sutton.
"Gears is fantastic and Google has done a great job of making it a secure technology. But if you slap that technology into an already vulnerable site, you're leaving your customers at risk," he explained.
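The reason a cross-site scripting hole exposes offline data is that any script running in a page executes with that site's origin, and the offline database is readable by everything in the origin. The baseline defence is making sure untrusted input can never become markup. A minimal illustration in Python; the rendering function and the `stealOfflineDb` name are invented for the example:

```python
import html

# Cross-site scripting works by smuggling markup through unescaped user
# input. Escaping on output keeps an injected <script> tag inert, which
# in turn keeps same-origin stores such as offline databases out of an
# attacker's reach.
def render_comment(user_input: str) -> str:
    """Render untrusted text inside a paragraph, escaped."""
    return f"<p>{html.escape(user_input)}</p>"

payload = '<script>stealOfflineDb()</script>'
print(render_comment(payload))
# The browser displays the tag as text instead of executing it.
```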
Security expert Craig Balding agreed that it was up to developers to secure their sites, as the line between desktop applications and web applications becomes more blurred.
"Every website wants to keep up in terms of features, but when developers turn to technologies like this they need to understand the pros and cons," he told BBC News.
Saturday, February 21, 2009
Hackers are exploiting an unpatched security hole in current versions of Adobe Reader and Acrobat to install malicious software when users open a booby-trapped PDF file, security experts warn.
"These types of attacks are frequently the most damaging and it is only a matter of time before this exploit ends up in every exploit pack on the Internet," Shadowserver volunteer Steven Adair wrote on the group's blog.
Forms can also run calculations. A form might, for example, report your height as 185 cm for those using metric units. A better example is an order form: you indicate which items you want to purchase, and the form automatically sums the total amount of the purchases.
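Both behaviours are one-line calculations over the form's fields. A sketch in Python of what such form logic computes; the item prices and function names are invented for illustration:

```python
# The form calculations described above are simple folds over the
# submitted fields. All values here are invented examples.
def to_centimeters(feet: int, inches: int) -> int:
    """Convert an imperial height entry to whole centimetres."""
    return round((feet * 12 + inches) * 2.54)

def order_total(items):
    """Sum (quantity, unit price) pairs into an order total."""
    return sum(qty * price for qty, price in items)

print(to_centimeters(6, 1))               # a 6'1" entry becomes 185 cm
cart = [(2, 9.99), (1, 24.50)]            # (quantity, unit price)
print(f"Total: ${order_total(cart):.2f}")  # Total: $44.48
```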
Friday, February 20, 2009
Halo 3 players are a popular target for the Xbox attacks
Hackers are targeting Xbox Live players. The 'booting' services, which knock rivals offline, are proving popular with players who want a way to get revenge on those who beat them in an Xbox Live game.
The attackers are employing data flooding tools that have been used against websites for many years.
Microsoft is "investigating" the use of the tools and said those caught using them would be banned from Xbox Live.
"There's been a definite increase in the amount of people talking about and distributing these things over the last three to four weeks," said Chris Boyd, director of malware research at Facetime Communications.
"The smart thing about these Xbox tools is that they do not attack the Xbox Live network itself," he said.
He said the tools work by exploiting the way the Xbox Live network is set up. Game consoles connecting to the Xbox network send data via the net, and each console needs an IP address to do so.
Even better, said Mr Boyd, games played via Xbox Live are not hosted on private servers.
"Instead," he said, "a lot of games on Xbox Live are hosted by players."
If hackers can discover the IP address of whoever is hosting a game they can employ many of the attacks that have been used for years against websites, said Mr Boyd.
One of the most popular for the Xbox Live specialists is the Denial of Service attack which floods an IP address with vast amounts of data.
The flood of data is generated by a group of hijacked home computers, a botnet, that have fallen under the control of a malicious hacking group.
When turned against a website this flood of traffic can overwhelm it or make it unresponsive to legitimate visitors.
When turned against an Xbox owner, it can mean they cannot connect to the Live network and effectively throws them out of the game.
"They get your IP address, put it in the booter tool and they attempt to flood the port that uses Xbox traffic," said Mr Boyd. "Flooding that port prevents any traffic getting out."
The hard part, he said, was discovering a particular gamer's IP address but many malicious hackers had honed the skills needed to find them.
Some interconnect their PC and Xbox and use packet sniffing software to hunt through the traffic flowing in and out of the console for IP addresses.
Others simply use con tricks to get the target to reveal their net address.
The technical knowledge needed to hunt down IP addresses was quite high, said Mr Boyd, but many of those who had the skills were selling their expertise to those keen to hit back at their rivals on the Xbox Live network.
For $20 (£13) some Xbox Live hackers will remotely access a customer's PC and set up the whole system so it can be run any time they need it.
Some offer low rates to add compromised machines to a botnet and increase the amount of data flooding a particular IP address.
Defending against the attack could be tricky, said Mr Boyd: "There's no real easy solution to this one."
Although IP addresses regularly change, people could find it takes hours or days for their ISP to move them on to a new one.
In response to the rise in attacks, Microsoft said: "We are investigating reports involving the use of malicious software tools that an attacker could use to try and disrupt an Xbox LIVE player's internet connection."
It added: "This problem is not related to the Xbox Live service, but to the player's internet connection. The attacker could also attempt [to] disrupt other internet activities, such as streaming video or web browsing, using the same tools."
It urged anyone falling victim to such an attack to contact their ISP to report it and get help fixing it.
In January 2009 Microsoft announced that Xbox Live had more than 17m members.
Next week, Windows 7 beta users will get a variety of updates, only they aren't really updates. Microsoft said it will send five test patches to PCs running the Windows 7 Beta (Build 7000) via Windows Update, purely to exercise the operating system's updating mechanism. "These updates allow us to test and verify our ability to deliver and manage the updating of Windows 7," the company said, adding that it typically verifies servicing scenarios during a beta.
The company stressed the updates won't actually add new features or update anything.
Brandon LeBlanc explained the nature of the updates in a blog posting.
Separately, Microsoft confirmed that Windows 7 will not support customized boot screens.
Most should not be surprised about this decision, not only because of the security and performance concerns, but because Microsoft has not supported customizing boot screens on its previous Windows operating systems.
Many of you might be asking if you could include your own animation or customize this sequence. This is not something we will support in Windows 7. We’ve talked about and shown a great many “personalization” elements of Windows 7 already, such as the new themepacks which you can try out in the beta. The reasons for this should be pretty clear, which is that we cannot guarantee the security of the system to allow for arbitrary elements to be loaded into memory at boot time. In the early stages of starting Windows, the system needs to be locked down and execute along a very carefully monitored and known state as tools such as firewalls and anti-virus checking are not yet available to secure the system. And of course, even though we’re sure everyone would follow the requirements around image size, content, etc. due to performance we would not want to build in all the code necessary to guarantee that all third parties would be doing so. One of our design goals of Windows 7 was around making sure there are ample opportunities to express yourself and to make sure your PC is really your PC and so we hope that you’ll understand why this element is one we need to maintain consistently.
This was a quick behind the scenes look at something that we hope you enjoy. With Windows 7 we set out to make the experience of starting a Windows PC a little more enjoyable, and from the feedback we’ve seen here and in other forums, we think we’re heading in the right direction. In addition to our efforts to make boot fast, we also have a goal to make the system robust enough, such that most of you will not see this new boot animation that often and when you do it will be both enjoyable and fast!
Wednesday, February 18, 2009
Duke University and the University of Massachusetts have created a unique set of conditions in which tiny particles within a solution will consistently assemble themselves into complex shapes, such as miniature 'Saturns' and flowers.
By manipulating the magnetization of a liquid solution, the researchers have for the first time coaxed magnetic and non-magnetic materials to form intricate nano-structures. The resulting structures can be "fixed," meaning they can be permanently linked together. This raises the possibility of using these structures as basic building blocks for such diverse applications as advanced optics, cloaking devices, data storage and bioengineering.
Changing the levels of magnetization of the fluid controls how the particles are attracted to or repelled by each other. By appropriately tuning these interactions, the magnetic and non-magnetic particles form around each other much like a snowflake forms around a microscopic dust particle.
"We have demonstrated that subtle changes in the magnetization of a fluid can create an environment where a mixture of different particles will self-assemble into complex superstructures," said Randall Erb, a fourth-year graduate student. He performed these experiments with another graduate student, Hui Son, in the laboratory of Benjamin Yellen, assistant professor of mechanical engineering and materials science and lead member of the research team.
The results of the Duke experiments appear in the Feb. 19 issue of the journal Nature.
The nano-structures are formed inside a liquid known as a ferrofluid, which is a solution consisting of suspensions of nanoparticles composed of iron-containing compounds. One of the unique properties of these fluids is that they become highly magnetized in the presence of external magnetic fields. The unique ferrofluids used in these experiments were developed with colleagues Bappaditya Samanta and Vincent Rotello at the University of Massachusetts.
"The key to the assembly of these nano-structures is to fine-tune the interactions between positively and negatively magnetized particles," Erb said. "This is achieved through varying the concentration of ferrofluid particles in the solution. The Saturn and flower shapes are just the first published examples of a range of potential structures that can be formed using this technique."
According to Yellen, researchers have long been able to create tiny structures made up of a single particle type, but the demonstration of sophisticated structures assembling in solutions containing multiple types of particles has never before been achieved. The complexity of these nano-structures determines how they can ultimately be used.
"It appears that a rich variety of different particle structures are possible by changing the size, type and/or degree of magnetism of the particles," Yellen said.
Yellen foresees the use of these nano-structures in advanced optical devices, such as sensors, where different nano-structures could be designed to possess custom-made optical properties. Yellen also envisions that rings composed of metal particles could be used for antenna designs, and perhaps as one of the key components in the construction of materials that display artificial "optical magnetism" and negative magnetic permeability.
In the Duke experiments, the nano-structures were created by applying a uniform magnetic field to a liquid containing various types of magnetic and non-magnetic colloidal particles contained between transparent glass slides to enable real-time microscopic observations of the assembly process. Because of the unique nature of this "bulk" assembly technique, Yellen believes that the process could easily be scaled up to create large quantities of custom-designed nano-structures in high-volume reaction vessels. However, the trick is to also be able to glue the structures together, because they will fall apart when the external field is turned off, he said.
"The magnetic forces assembling these particles are reversible," Yellen said. "We were able to lock these nano-structures in their intended shapes both by using chemical glues and by simple heating."
The Duke team plans to test different combinations of particles and ferrofluids developed by the University of Massachusetts team to create new types of nano-structures. They also want to try to make even smaller nano-structures to find the limitations of the assembly process, and study the interesting optical properties which are expected from these structures.
The Ethics of Nanotechnology
What kind of world do we wish to inhabit and leave for following generations? Our planet is in trouble if current trends continue into the future: environmental degradation, extinction of species, rampant diseases, chronic warfare, poverty, starvation and social injustice.
Are suffering and despair humanity's fate? Not necessarily. We have within our grasp the technology to help bring about great progress in elevating humanity. Or we can use our evolving knowledge for destructive ends. We are already immersed in fiery debates on genetic engineering, cloning, nuclear physics and the science of warfare. Nanotechnology, with its staggering implications, will create a whole new set of ethical quandaries. A strong set of operating principles is needed -- standards by which we can guide ourselves to a healthier destiny.
The following are some ethical guidelines gleaned from both Foresight and our own philosophy and experience in this field:
* Nanotechnology's highest and best use should be to create a world of abundance where no one is lacking for their basic needs. Those needs include adequate food, safe water, a clean environment, housing, medical care, education, public safety, fair labor, unrestricted travel, artistic expression and freedom from fear and oppression.
* High priority must be given to the efficient and economical global distribution of the products and services created by nanotechnology. We recognize the need for reasonable return on investment, but we must also recognize that our planet is small and we all depend upon each other for safety, stability, even survival.
* Military research and applications of nanotechnology must be limited to defense and security systems, and not for political purposes or aggression. And any government-funded research that generates useful non-military technological advances must be made available to the public.
* Scientists developing and experimenting with nanotechnology must have a solid grounding in ecology and public safety, or have someone on their team who does. Scientists and their organizations must also be held accountable for the willful, fraudulent or irresponsible misuse of the science.
* All published research and discussion of nanotechnology should be as accurate as possible, adhere to the scientific method, and give due credit to sources. Labeling of products should be clear and accurate, and promotion of services, including consulting, should disclose any conflicts of interest.
* Published debates over nanotechnology, including chat room discussions, should focus on advancing the merits of the arguments rather than personal attacks, such as questioning the motives of opponents.
* Business models in the field should incorporate long-term, sustainable practices, such as the efficient use of resources, recycling of toxic materials, adequate compensation for workers and other fair labor practices.
* Industry leaders should be collaborative and self-regulating, but also support public education in the sciences and reasonable legislation to deal with legal and social issues associated with nanotechnology.
Tuesday, February 17, 2009
At the Mobile World Congress in Barcelona, Spain, the software giant announced that it and its key mobile partners were unveiling new smartphones with upgraded Microsoft software.
The next generation of phones will be based on Windows Mobile 6.5, the new version of Microsoft's operating system for handsets, which is expected to be available in the latter half of this year.
Microsoft wants to create software buzz on mobiles
Microsoft CEO Steve Ballmer's mobile phone strategy: sell a lot of devices.
"Many phones times a small amount of money, hopefully, is enough to make this all make sense," Ballmer said in an interview on the sidelines of the four-day GSMA Mobile World Congress, where Microsoft unveiled a new mobile phone strategy.
The company, best known for its PC software but a player in the mobile field for the last seven years, wants to persuade consumers to buy smart phones, the fastest-growing segment of the handset market, because they run a Microsoft operating system.
That may seem counterintuitive. Even Ballmer knows that hardware — not software — is what creates consumer excitement, something in shorter supply as the world economic downturn has dramatically cut consumer confidence.
"The thing that people buzz about is the actual thing they go and buy, which is the phone, which comes from one of our partners," Ballmer said in an interview Monday.
The new software, Windows Mobile 6.5, will be marketed to consumers simply as Windows Phone and will include a new user interface and a new browser. Microsoft also is launching two new services: one that allows users to sync their text messages, photos, video, contacts and more to the Web, and an applications store that will bring together the 20,000 applications that have been developed for Microsoft-based phones.
"It is important for us that we have a strong presence and position on the phone," Ballmer said.
More than 20 million devices carrying Microsoft's operating system were sold in 2008. Ballmer said he expects to grow the market share, but he declined to make forecasts. Microsoft doesn't say how much it sells the software for, but analysts at the GSMA put it in the ballpark of $5 to $7 per handset.
"The most important thing we'll do is we're going to work with the guys who build phones that are exciting ... that are hot and tell the story of their Windows phone," Ballmer said. "The Windows phone from HTC, the line of Windows phones from Samsung, from LG, really getting with the partner and telling the story of the partner and their device."
To that end, key partners HTC, LG Electronics and Orange also unveiled new Windows phones based on the new Windows operating system in Barcelona. LG said it will dramatically increase the number of phones it offers running Windows, making it the primary operating system for its smart phones. LG said its volume of Windows phones would increase 10 times this year.
Telecoms operators, notably Vodafone, have signaled that they want fewer, not more, operating platforms.
But Ballmer thinks Windows Mobile is better positioned than the other operating systems because it can run on phones across a range of prices, from $600 smart phones down to $250 models.
IDC analyst Francisco Jeronimo said that while Microsoft's numbers are pretty good, the battle for operating system (OS) dominance is still wide open. The big industry players in the increasingly key smart phone market are Google's Android, Nokia's Symbian, which has opened up to outsiders through the Symbian Foundation, the Linux-based open-source software being developed by the LiMo consortium, and Palm OS.
"Definitely, Android, Symbian and Windows Mobile will be top OS in terms of smart phones. The challenge now for Microsoft is: No one wants to pay for an OS when they have Symbian and Android for free. What is the point?"
While 20 million devices last year shipped with Microsoft's OS, Nokia shipped 17 million smart phones to western Europe alone, along with 59 million traditional devices. While the big manufacturers seem to be waiting to make their Android announcements during the second half of the year, Jeronimo said Google's open-source software is sure to be a big player in two or three years.
"Microsoft are the ones challenged now," Jeronimo said. "My question is how long will they continue with a proprietary system?"
Michael Gartenberg, vice president of strategy and analysis for Los Angeles-based market research group Interpret LLC, didn't discount the importance of actual sales. But he said the software maker still needs to build buzz among consumers, rather than relying on the device's reputation as a workhorse that synchs up well with Microsoft's Exchange server.
"I think what they're doing now is reminding the market that these devices are the intersection between business and consumer, personal and work life," Gartenberg said.
Monday, February 16, 2009
Using mobile phones has enormous potential for increasing access to healthcare for poor people around the world, and for improving clinical outcomes. Now a new association, the mHealth Alliance, has been launched to support this emerging field and increase the scale and impact of the many small projects around the world.
The Alliance is so new that it has no website, press release or formal organization yet; it was announced to the BBC as part of the GSMA Mobile World Congress in Barcelona. The mHealth Alliance is currently under the auspices of three foundations: the UN and Rockefeller Foundations in the United States, and the UK-based Vodafone Group Foundation.
Deploying mobiles in healthcare in developing countries is not only promising for health outcomes, it is also a hot and potentially lucrative business area. There is enormous interest among NGOs, donors, telecoms, mobile vendors, researchers and governments in the use of mobile phones to increase healthcare for the poorest people in the world.
Three foundations have announced their intention to join in a "mobile health" effort to use mobile technology to provide better healthcare worldwide.
The mHealth Alliance, the initiative of the UN, Vodafone and Rockefeller foundations, aims to unite existing projects to improve healthcare using mobile technology.
The alliance will guide governments, NGOs, and mobile firms on how they can save lives in the developing world.
The partnership is now calling for more members to help in mHealth initiatives.
The groundbreaking "mHealth for Development" study produced by the UN/Vodafone Foundation Partnership lists more than 50 mHealth programmes from around the world, showing the benefits that mobile technology can bring to healthcare provision.
The report also outlines how such programmes offer value to the mobile industry.
That, said UN/Vodafone Foundation Partnership head Claire Thwaites, is a crucial step in an industry that like so many others stands at the edge of a downturn.
"I think there's a real need to have an alliance," Ms Thwaites told the BBC at the Mobile World Congress (MWC) in Barcelona.
"It's looking at scaling up and bringing governments together with NGOs and corporations, and it will commission pretty rigorous research on what the market opportunity is for mHealth, answering the question: why should a business get involved in this area?"
Bringing a "value proposition" to network operators is what could bring together the individual, small-scale efforts that so far have existed as purely humanitarian endeavours.
Andrew Gilbert, European president of Qualcomm, says that his firm has launched 29 different programmes across 19 countries, involving some 200,000 people, as part of its Wireless Reach campaign.
"It's not a charitable thing, it's very much aimed at allowing these solutions to become self-sustaining," he said.
Because 3G mobile technology is cheap and easily made widespread, Mr Gilbert added, comparatively small amounts of investment can wreak great change in these so-called emerging markets.
"In India, there are 1m people that die each year purely because they can't get access to basic healthcare," said Dan Warren, director of technology for the GSM Association, the umbrella organisation that hosts the MWC.
"The converse angle to that is that 80% of doctors live in cities, not serving the broader rural communities where 800 million people live."
Simply connecting rural areas with city doctors using mobile broadband would allow the provision of better healthcare to more people, and many of the initiatives to date have focused on that kind of connection.
In 2007, the GSMA supported Ericsson in its Gramjyoti project, providing broadband to the remote Indian villages in the southern state of Tamil Nadu.
A band of paramedics in a mobile broadband-equipped van visited the villages and were able to cover vast areas, referring many queries back to doctors in major cities.
Yet mobile technology, as much as it can multiply the efforts of city-dwelling doctors and bring diagnoses to far-flung villages, cannot make up for some shortfalls.
"There's 4 billion mobile phones now in the world, 2.2 billion of those in the developing world," said Ms Thwaites. "Compare that to 305 million PCs and then look at hospital bed numbers: there's 11 million of them in the developing world."
As a result, mHealth projects must also be able to provide an ounce of prevention, and the report sheds light on some particularly successful initiatives.
In South Africa, the SIMpill project integrated a sensor-equipped medicine bottle with a SIM card, ensuring that healthcare workers were advised if patients were not taking their tuberculosis medicine.
The proportion of patients keeping up with their medicine rocketed from 22% to 90%.
The medium of text message can overcome sociological barriers as well.
The Project Masiluleke SMS message campaign provided people with free text messages, with the remainder of the 160 characters used to provide HIV and Aids education.
In Uganda, the Text to Change text-based HIV quiz campaign resulted in a 33% increase in calls to an HIV information hotline.
"There are a couple of interesting benefits that the project brought to light," says UN Foundation spokesperson Adele Waugaman. "One of them is the benefit of talking to people in their local language.
"Also, HIV is very stigmatised in South Africa, so people don't like to discuss it publicly. The benefit of getting these private text messages is it's a new form of access that addresses these stigmatisation and privacy concerns."
Healthcare also includes improving quality of life. One case study from Qualcomm's Wireless Reach programme, 3G for All Generations, shows how mobile broadband has brought the company together with the Spanish Red Cross and Vodafone Spain to provide a custom software solution for Spain's elderly.
They can have video calls with care providers, call for help, or simply have a chat, providing real social interaction without anyone needing to travel.
Each of these projects, and the many more in the new report, showcases the potential of the technology but also underlines mHealth's most significant stumbling block so far.
"The biggest problem is fragmentation of small projects," says Ms Thwaites.
"A lot of the work being done on the ground is NGO- and foundation-led, but let's join those efforts with the Microsofts and the Qualcomms and the Intels and the Vodafones.
"There's a business case for it now; you have to have the experience of the NGOs on the ground talking to the big corporates out there and creating real business models, and that's why I think the mHealth Alliance can tackle that."
Thursday, February 12, 2009
JavaFX provides a unified development and deployment model for rich applications across the desktop, browser, and mobile devices. Sun developer Joshua Marinacci describes it this way:
When you write JavaFX desktop apps with the common profile you are also writing for mobile devices. Desktop and mobile aren’t different platforms…
It struck me this morning how much of a big deal this is. I don’t know anything about Java ME, but I know JavaFX. Even though I’m not a mobile developer I can write mobile apps with JavaFX. I couldn’t do that before. One SDK, one set of tools, one language, one set of APIs. There is no JavaFX Mobile. There is only JavaFX.
Joshua believes JavaFX is a fundamental shift in the way user interfaces are developed for Java programs. “I see JavaFX as Swing 2.0: rewrite from the ground up”, he wrote in a recent Twitter update. He should know: he’s a member of Sun’s Swing team and co-author of the popular book, Swing Hacks. Swing is currently Sun’s preferred API for building user interfaces for desktop applications, but it’s not supported on Java ME. Now JavaFX can be used for all systems.
On the business side, Sun has lined up an impressive array of partners that plan to deliver JavaFX enabled devices. They include Sony Ericsson, LGE, Orange, Sprint, Cynergy, and MobiTV. Here are a few quotes from today’s announcement:
“Sony Ericsson expects that JavaFX will have a great impact on the mobile content ecosystem and plan to bring JavaFX to a significant part of our product portfolio.” — Rikko Sakaguchi, corporate vice president and head of creation and development at Sony Ericsson Mobile Communications
“We look forward to being the first company to deliver a JavaFX enabled handset so we can build new and exciting features that benefit our customers.” — Woo-Young Kwak, executive vice president, head of LG Mobile Handset R&D Center at LG Electronics, Inc.
“Sprint, in its continued support of an open framework and ecosystem, views JavaFX as an additional strategic platform in its open toolkit.” — Mathew Oommen, vice president, device and technology development, Sprint
“JavaFX really allows us to leverage our Java ME investment, and reinforce our core mobile video streaming value proposition.” — Cedric Fernandes, vice president, Technology at MobiTV.
There is only JavaFX.
I know I haven't been blogging, twittering, or doing the FaceBook very much lately. That's because I've been very, very busy working. As we promised last summer, the next release of JavaFX is coming out soon. If you're a desktop developer you won't notice too many changes, mostly bug fixes (and a feature or two). The big news is that this will be our first release with full mobile support. Of course this really isn't news either, since it's what we promised last summer. In fact, mobile support has been the driver for this release. Pay attention for news coming out soon with the details. So with no news for you, why am I writing this blog?...
I'm running a sample JavaFX app on a demo phone (yes, a real phone). I won't tell you what phone it is, but I will say that it has a very nice high-resolution screen (no, it's not an iPhone). As I've been working with this device it struck me how easy it was to code for. And there's a very good reason for that. When you write JavaFX desktop apps with the common profile you are also writing for mobile devices. Desktop and mobile aren't different platforms. There is only one JavaFX. Even though there is no mobile emulator for Mac, I've done all of my mobile sample work on my Mac. I write my desktop apps to support window resizing, resize to the approximate size of a mobile device, then save my code. I only switch to Windows every now and then for a quick test in the real emulator. As long as I only use the common profile, everything just works.
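That write-once workflow (code against the common profile, preview at phone dimensions on the desktop) can be sketched in JavaFX Script. This is an illustrative sketch, not code from the release, and the 240x320 scene size is just an arbitrary stand-in for a phone-sized screen:

```javafx
// Hypothetical hello-world using only the common profile, so the same
// source runs unchanged on the desktop and on a JavaFX-enabled handset.
import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.text.Text;

Stage {
    title: "One JavaFX"
    scene: Scene {
        width: 240    // approximate phone-screen size for desktop preview
        height: 320
        content: [
            Text { x: 10, y: 30, content: "There is only JavaFX." }
        ]
    }
}
```

Nothing here is mobile-specific: sticking to common-profile classes is what keeps the one source portable.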
Google must be on a location-aware kick this month. Just a week after the search firm released its Latitude mobile device friend-tracker, Google Labs has a new tool that lets Gmail automatically include your location in an e-mail's signature.
"Sometime ago, I noticed how all mail systems tell you when an email was written, but not where it was sent from," said Marco Bonechi, the author of the tool on Gmail's blog. "Because I love to travel, the first question in many messages I receive is 'where are you?' and by the time I answer I am often somewhere else."
The experimental feature can be switched on by going to the Labs tab in Gmail settings. Users also need to have their email signature enabled and have the "append your location to the signature" option clicked in the general settings tab.
You can always just delete the location info in the email if you don't want the recipient knowing the location of your secret bunker – or if you're just embarrassed about what a shut-in you've become.
Gmail gives away your location...
Google Inc. certainly is focused on where you are and letting others in on that information.
A week after unveiling Google Latitude, which enables people to track the exact location of friends or family through their mobile devices, the company today announced that its Gmail software can now show the location of e-mail writers.
"Some time ago, I noticed how all mail systems tell you when an e-mail was written, but not where it was sent from," said Marco Bonechi, a Google software engineer, in a blog post. "Because I love to travel, the first question in many messages I receive is 'Where are you?' and by the time I answer, I am often somewhere else. So in my 20% time, I wrote an experimental Gmail Labs feature that detects your location and appends the city region and country names to your signature."
Bonechi noted that people can use the new Location in Signature feature by going to the Labs tab in Gmail under Settings and then clicking on Signature Preferences.
"It'll use your public IP address to determine your location, so it may not always be that accurate," he noted. "For example, if you're at Heathrow Airport, IP detection may put you in Germany. If you want more accurate location detection, make sure your browser has a version of [Google] Gears that supports the location module. That way, Gears can make use of Wi-Fi access-point signals to recognize that you're actually in London."
Bonechi also added that users who want to keep their locations private can disable the option or delete their locations from specific e-mails.
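The IP-based lookup Bonechi describes boils down to matching your public address against a table of network prefixes. The table below is entirely invented for illustration (real GeoIP databases map many thousands of prefixes, and Gears-style Wi-Fi lookups are finer-grained still); it simply shows why the answer gets coarser as the matching prefix gets shorter:

```python
import ipaddress

# Hypothetical GeoIP table: network prefix -> (city, country).
# A short prefix match yields only a country; a long one pins down a city.
GEOIP_TABLE = {
    ipaddress.ip_network("81.2.69.0/24"): ("London", "United Kingdom"),
    ipaddress.ip_network("81.2.0.0/16"): ("", "United Kingdom"),
    ipaddress.ip_network("0.0.0.0/0"): ("", "Unknown"),
}

def locate(ip_string):
    """Return (city, country) for the longest prefix containing the address."""
    ip = ipaddress.ip_address(ip_string)
    matches = [net for net in GEOIP_TABLE if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return GEOIP_TABLE[best]

def signature_location(ip_string):
    """Format the location the way a signature line might, skipping blanks."""
    city, country = locate(ip_string)
    return ", ".join(part for part in (city, country) if part)

print(signature_location("81.2.69.142"))  # London, United Kingdom
print(signature_location("81.2.100.5"))   # United Kingdom
```

The second lookup only hits the /16, which is exactly the Heathrow-vs-Germany failure mode: the database simply has no finer entry for that address.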
Google's tracking technology hasn't received full support from security experts.
Just a day after Google Latitude was released, Privacy International called Google's new mapping application an "unnecessary danger" to users' security and privacy.
Simon Davies, director of the London-based privacy rights group, said in a statement that Google Latitude could be a "gift" to stalkers, prying employers and jealous partners.
But Google was quick to respond. Replying to Computerworld questions in an e-mail, a spokeswoman said the company's engineers and designers took privacy and security concerns into account when they created Google Latitude.
Wednesday, February 11, 2009
In Western Europe, about a fifth of connections are estimated to be due to one user having more than one device, a figure that probably applies to many developed markets, a GSMA spokesman said.
In developing countries, by contrast, phones are often shared.
In the run-up to the Mobile World Congress in Barcelona -- the wireless industry's biggest trade show which starts next Monday -- the GSMA said some 100 million connections were "mobile broadband" connections. This refers to mobile data connections using the high-speed HSPA standard.
The figure reflects the popularity of "dongles" which connect laptops to the Internet via mobile phone networks, as well as phones with high-speed data connections made by Nokia or HTC or the latest version of Apple's iPhone.
Tuesday, February 10, 2009
"A hologram, as you find it on bank notes or credit cards, appears to show a three-dimensional picture, even though in fact it is just two-dimensional," Daniel Grumiller explained. He is at the Institute of Theoretical Physics, Vienna University of Technology.
For decades, scientists have been wondering about the existence of additional dimensions so far hidden to our senses.
Grumiller and his colleagues are trying the opposite approach: Instead of postulating additional dimensions, they believe that our universe could in fact be described by less than four dimensions.
Grumiller is currently working on gravitational theories which include two spatial dimensions and one time dimension. They can be mapped onto a two-dimensional gravitationless quantum theory.
Such theories can be used to describe rapidly rotating black holes or "cosmic strings" — spacetime defects, which probably appeared shortly after the Big Bang.
In such a case, reality has fewer dimensions than it appears to have. This "holographic principle" plays an important role in the physics of space time.
Instead of creating a theory of gravity in all the time and space dimensions, one can formulate a new quantum theory with one fewer spatial dimension.
That way, a 3D theory of gravitation turns into a 2D quantum theory, in which gravity does not appear any more. Still, this quantum theory correctly predicts phenomena like black holes or gravitational waves, said a Vienna release.
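The release doesn't spell out the standard worked example, but the best-known case of this reduction is Brown and Henneaux's result for three-dimensional gravity on anti-de Sitter space, sketched here from the literature rather than from the article:

```latex
% Brown--Henneaux: (2+1)-dimensional gravity on AdS_3 of curvature
% radius \ell is governed by a two-dimensional conformal theory with
% central charge
c = \frac{3\ell}{2G} .
% Counting the states of that gravityless 2D theory with the Cardy formula,
S = 2\pi\sqrt{\frac{c\,L_0}{6}} + 2\pi\sqrt{\frac{c\,\bar{L}_0}{6}} ,
% reproduces the entropy of the rotating (BTZ) black hole of the 3D
% theory: gravitational physics recovered from a theory with one fewer
% dimension and no gravity in it.
```

This is the sense in which the 2D quantum theory "correctly predicts" black holes of the higher-dimensional gravity theory.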
We perceive the space around us as three-dimensional, in terms of length, width and depth or height. According to Einstein, time and space are inseparably linked. Adding the time axis to them makes our space-time-continuum four-dimensional.
Creating a unified theory of quantum gravitation is often considered to be the "Holy Grail" of modern science.
Daniel Grumiller from the Institute of Theoretical Physics, Vienna University of Technology, can now at least unravel some of the mysteries of quantum gravitation. His results on black holes and gravitational waves are pretty mind-boggling - to say the least. Only recently he won the START prize and will use these funds to engage even more young physicists at the TU Vienna.
We perceive the space around us as three-dimensional. According to Einstein, time and space are inseparably linked. Adding the time axis to our three-dimensional space makes our space-time-continuum four-dimensional. For decades, scientists have been wondering about the existence of additional dimensions so far hidden to our senses. Grumiller and his colleagues are trying the opposite approach: Instead of postulating additional dimensions, they believe that our universe could in fact be described by less than four dimensions.
“A hologram, as you find it on bank notes or credit cards, appears to show a three-dimensional picture, even though in fact it is just two-dimensional,” Grumiller explains. In such a case, reality has fewer dimensions than it appears to have. This “holographic principle” plays an important role in the physics of space time. Instead of creating a theory of gravity in all the time and space dimensions, one can formulate a new quantum theory with one fewer spatial dimension. That way, a 3D theory of gravitation turns into a 2D quantum theory, in which gravity does not appear any more. Still, this quantum theory correctly predicts phenomena like black holes or gravitational waves.
“The question of how many dimensions our world really has probably cannot be answered explicitly,” Grumiller thinks. “Depending on the particular question we are trying to answer, either one of the approaches may turn out to be more useful.”
Grumiller is currently working on gravitational theories which include two spatial dimensions and one time dimension. They can be mapped onto a two-dimensional gravitationless quantum theory. Such theories can be used to describe rapidly rotating black holes or “cosmic strings” – spacetime defects, which probably appeared shortly after the Big Bang.
Together with colleagues from the University of Vienna, Grumiller is organizing an international workshop, which will take place from April 14 to 24, 2009. Renowned participants, including scientists from Harvard, Princeton, MIT and many other universities, reveal that the Viennese gravitation physicists are held in high regard internationally.
What should be less controversial is that intelligent spending decisions about funding for high-speed Internet connections can't be made without excellent and transparent data about our broadband infrastructure.
President Obama's commitment to "change" has included a more hands-on approach to promoting broadband. Throughout the presidential campaign, and repeatedly since the election, Obama has emphasized the importance of "expanding broadband lines across America." With input from his telecommunications advisors, the House stimulus bill included $6 billion for broadband. Early versions of the Senate measure raised the total to $9 billion.
Equally important is Obama's commitment to empirically-driven policymaking. In January, Obama became only the second president—after William Howard Taft in 1909—to invoke "statistics" in an inaugural address, when he spoke of "the indicators of crisis, subject to data and statistics."
Yet almost none of this $8 billion in statistical spending goes to compiling information about broadband, the infrastructure of the knowledge-based economy. And the data that has been collected has been presented in ways that mislead.
The FCC—the official record-keeper on private-sector telecommunications—for years claimed that there was adequate competition in broadband because the median ZIP code was served by eight separate providers. The Government Accountability Office's assessment of the same data found a median of two providers per ZIP code. Worse, the FCC refuses to release the information that it has about competition.
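The gap between the FCC's median of eight and the GAO's median of two comes down to which (provider, ZIP) pairs count as "served" in the first place; the median itself is a simple roll-up. A toy version of that computation, on invented filing data, looks like this:

```python
from collections import defaultdict
from statistics import median

# Hypothetical reporting records: (provider, zip_code) pairs of the kind
# carriers file with the FCC. Which pairs qualify as "serving" a ZIP is
# exactly what the FCC and GAO disagreed about.
records = [
    ("AlphaNet", "20001"), ("BetaCable", "20001"), ("GammaDSL", "20001"),
    ("AlphaNet", "20002"), ("BetaCable", "20002"),
    ("AlphaNet", "20003"),
]

providers_by_zip = defaultdict(set)
for provider, zip_code in records:
    providers_by_zip[zip_code].add(provider)

counts = sorted(len(providers) for providers in providers_by_zip.values())
print("providers per ZIP:", counts)              # [1, 2, 3]
print("median providers per ZIP:", median(counts))  # 2
```

With a looser inclusion rule (say, counting a provider with one subscriber anywhere in the ZIP), the same roll-up yields a much higher median, which is why the underlying records, and not just the summary statistic, need to be public.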
A variety of organizations—including my own free web service, BroadbandCensus.com—have stepped in to do our best at collecting, compiling and releasing public broadband information. We believe that if you want to build a road, you need a map that tells you where existing roads lie before you begin taking construction bids, let alone start pouring concrete. Where will our nation's new broadband highways, by-ways and access points be built? Who's going to let the contracts? Who will own this infrastructure?
These questions can't be answered without detailed broadband data. To that end, I've supported a proposed "State Broadband Planning and Assessment Act," which could be introduced as an amendment to the fiscal stimulus measure. The goal of this effort, as of BroadbandCensus.com, is to unleash the Internet as means of sharing information about the Internet itself.
For two-and-a-half years, I've been trying to get access to basic broadband data for the public, including citizen-consumers, businesses, and local policy-makers. I've been seeking to identify which carriers offer service in a particular ZIP code, as well as smaller units, like census blocks. In September 2006, when I headed a project at the Center for Public Integrity that investigated the telecommunications industry, we filed a Freedom of Information Act lawsuit against the FCC to force them to release basic broadband data about carriers by ZIP code.
The project obtained and displayed similar location information about broadcasters and cable operators from the FCC's cumbersome web site. But our attempts to get broadband data were thwarted by the FCC and by industry. AT&T, Verizon Communications, and the lobbying organizations representing the Bell companies, the cable companies, the cell phone carriers, and wireless broadband providers all asked the FCC to deny information to the public. Even though every consumer who buys broadband knows the name of the company that provides them with service, the telecoms argued that compiling this information into a single location would reveal "proprietary" data. The FCC agreed.
Congress was critical of the FCC's meager broadband statistics. In October it passed the Broadband Data Improvement Act to prod the agency to collect broadband data at a level more granular than the ZIP-code. The FCC began doing just that in June, as the bill was working its way through Congress.
But under pressure from telecom lobbyists, Congress dropped a core provision from the House version of the bill: the requirement that a separate agency, the Commerce Department's National Telecommunications and Information Administration, take responsibility for conducting a national broadband census and producing a public map with the names of individual carriers and where they offered service.
The House version of the stimulus bill reintroduces the NTIA broadband map. But it takes out any mention of publicly releasing individual carrier names. Worse, the Broadband Data Improvement Act enshrined the business model favored by the carriers: providing information to an entity like Connected Nation, which agrees to excise the names of broadband providers from the maps they produce.
The House stimulus bill allocated $40 million to this business model. Last week's version of the Senate stimulus bill upped the total to $350 million.
President Obama has the opportunity to make broadband a priority in his administration by ensuring that the NTIA creates a public map of our national broadband providers and infrastructure. Map in hand, the Obama administration's broadband policy should be guided by three important principles:
1) Use the Internet to empower citizens and consumers.
With the FCC keeping broadband data out of the hands of the public, I started BroadbandCensus.com to publish the same information that any consumer can know: the name of their Internet service provider and type of broadband connection, how much they are charged for service, and the Internet speeds they are promised and actually delivered. The government of Ireland publishes exactly the same information on its communications ministry web page.
Some broadband data efforts focus on the needs of telecommunications carriers and their unionized employees. Based in Kentucky, Connected Nation has been promoting their state-wide maps of broadband availability as a means for providers to sell more service. The Communications Workers of America's Speed Matters campaign has collected random speed tests from Internet users to provide a snapshot about download and upload speeds. Both of these initiatives are good, so far as they go.
But to rigorously understand the condition of broadband, we can't rely only on the information provided by the carriers. It needs to be verified by Internet users. To truly unlock the power of Internet-enabled "crowdsourcing," an effective broadband strategy must focus on citizens. Empower them by releasing basic information and letting citizen-consumers add to the mash-up. It's about making citizens contributors as well as constituents.
2) Ensure that infrastructure investment is made on the basis of cost-benefit data.
In 1790, the United States was the first country to institute a periodic national census. What started as a questionnaire seeking only demographic information had broadened by 1840 to information about employment in mining, agriculture, manufacturing, and the "learned professions and engineers." Such information has enabled our government, our universities and our business sector to rely on good-quality statistical information.
We're going to need that kind of data, and a lot more of it, to make sound investment decisions about broadband. Because of our nation's agricultural origins, our statistical agencies provide far more data about crop production than they do about broadband availability, speeds, or prices. In the absence of good data, the temptation is to make public infrastructure investment decisions based on political pressure or lawmaker influence, rather than upon solid cost-benefit analyses.
3) Use the transparency of the Internet to regulate incumbents through public disclosure.
The regulatory philosophies of the New Deal—maximum and minimum wages and prices, hands-on federal regulation—have faded and are not likely to be revived even in the current crisis. Yet one Depression-era innovation of Franklin D. Roosevelt remains as valid as ever: the disclosure-based regime of the Securities and Exchange Commission.
The SEC is vigilant in requiring punctilious compliance with requirements that public companies disclose details of their operations. By and large, the SEC doesn't require substantive actions so much as it requires procedural compliance and full disclosure. Open information flows mean that poor corporate decisions are punished in the marketplace.
Equally important is the role that independent efforts, like those of BroadbandCensus.com and others, can play in collecting and aggregating public broadband data about speeds, prices, and reliability.
For more than a year, BroadbandCensus.com has provided a platform allowing Internet users to compare their actual broadband speeds against what they are promised by their carriers. We use the open-source Network Diagnostic Tool (NDT) of Internet2. All speed test data is publicly displayed under a Creative Commons license. This approach to publicly monitoring Internet traffic has recently been followed by Google and the New America Foundation with their "Measurement Lab" initiative, which also uses NDT.
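As a sketch of the kind of aggregation such a census enables (the carriers and speeds below are invented, not BroadbandCensus.com data), a handful of user-submitted tests can be rolled up into a delivered-speed ratio per carrier:

```python
# Hypothetical user-submitted speed tests: advertised vs. measured
# download speed in Mbit/s, as a crowdsourced census might collect them.
tests = [
    {"carrier": "AlphaNet",  "advertised": 10.0, "measured": 6.1},
    {"carrier": "AlphaNet",  "advertised": 10.0, "measured": 7.3},
    {"carrier": "BetaCable", "advertised": 16.0, "measured": 14.9},
]

def delivered_fraction(results, carrier):
    """Average measured/advertised ratio across one carrier's tests."""
    ratios = [t["measured"] / t["advertised"]
              for t in results if t["carrier"] == carrier]
    return sum(ratios) / len(ratios)

for carrier in ("AlphaNet", "BetaCable"):
    print(f"{carrier}: {delivered_fraction(tests, carrier):.0%} of advertised speed")
```

Published openly, even this simple ratio lets consumers compare promised against delivered service, which is the disclosure-based pressure the article argues for.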
Ultimately, broadband carriers that offer good speeds and good service will see the value in an objective and transparent broadband census. Fortunately, consumers don't need to wait on the carriers to begin collecting and publishing broadband of their own.
Neither should the government. No matter how much Congress decides to allocate to stimulate broadband, it should insist that information about speeds, prices, technologies, and specific locations of high-speed Internet availability are publicly available to all.
1. The focus should be on the so-called "last mile" or local access network portion of the system. There's a broad consensus that the lack of adequate broadband access in the United States is due to technological shortcomings on this segment of the telecommunications infrastructure, its weakest link. The overall goal should be full build out of this currently incomplete but vital infrastructure to serve all residents and businesses.
Sunday, February 8, 2009
When it comes to advanced vehicle batteries, innovation and a little stimulus money could head off another OPEC scenario.
Most of the big players in advanced batteries - the ones used to power the cars of tomorrow - are from Japan, South Korea or China.
America's battery industry is in need of a shock. Enter Stimulus.
As part of the nearly $900 billion economic lifeline, lawmakers plan on spending $2 billion in loan guarantees and grants for makers of advanced batteries. The recipients don't necessarily have to be U.S. companies, but they need to set up shop on American soil.
Industry observers have high hopes for the plan, but worry that the money won't be doled out fast enough or that it will be eaten up by a few big players.
Stimulus money: Not chump change
The money involved may seem small by stimulus standards, but for the nascent high-tech battery business it's serious cash.
Two billion dollars is more than has flowed into the sector from venture capital and private equity firms over the last four years combined, said Heather Daniel, a power storage analyst at the research firm New Energy Finance.
"It could be a significant boost," said Daniel. "If there's a little guy that's got the technology, it could have big implications."
Most advanced batteries for plug-in hybrid electric vehicles, like the Chevy Volt set for debut in 2010, rely on lithium ion batteries.
Lithium batteries - where the ultra-light metal lithium is used as the conducting material - are more efficient and lighter than the nickel-metal hydride batteries currently used in the hybrid cars of today.
Having a light and efficient battery is essential if cars are to move from current hybrids - which use battery power only for low speed driving - to plug-in hybrids where battery power is the only thing turning the wheels.
Getting the right battery is key to making plug-in hybrids commercial - current batteries are still a bit too heavy and a bit too expensive. (There may also be an issue with getting enough lithium - much of it appears to be concentrated in a few South American countries, but that's another story.)
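To see why the chemistry matters for weight, compare ballpark specific energies. The figures below are illustrative round numbers of our own, not from the article, and they ignore packaging and thermal-management overhead:

```python
# Illustrative ballpark specific energies in Wh/kg (assumed, not sourced
# from the article): lithium-ion cells store roughly twice the energy
# per kilogram of nickel-metal hydride cells.
SPECIFIC_ENERGY = {"lithium-ion": 130.0, "nickel-metal hydride": 65.0}

def pack_mass_kg(capacity_wh, chemistry):
    """Cell mass needed to store capacity_wh, ignoring pack overhead."""
    return capacity_wh / SPECIFIC_ENERGY[chemistry]

# A 16 kWh plug-in hybrid pack, roughly the size announced for the Volt:
for chemistry in SPECIFIC_ENERGY:
    print(f"{chemistry}: {pack_mass_kg(16_000, chemistry):.0f} kg of cells")
```

Under these assumptions the lithium-ion pack needs about 123 kg of cells where nickel-metal hydride would need about 246 kg, which is why plug-in designs, where the battery alone turns the wheels, hinge on lithium.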
King of battery hill
The company or companies that nail the technology are potentially set for big profits and big hiring sprees. And currently, while many foreign firms have manufacturing operations in the United States, most of those companies are not headquartered here.
Japan's Panasonic, NEC, and GS Yuasa; South Korea's LG; and China's BYD are the main players in this market, and account for nearly all current lithium ion sales.
"The United States is alarmingly vacant from this list," said Rob Wilder, manager of the WilderHill clean energy index, an investment fund. "It's painful as a patriotic American to see just how far behind we are."
That said, U.S. firms are not out of the game.
Companies like Johnson Controls (JCI, Fortune 500), Ener1 (HEV), Maxwell (MXWL), Valence (VLNC), and privately held A123 Systems are noted for innovative, advanced-vehicle battery technology, if not a huge amount of current sales.
General Motors is working with A123 Systems on the Volt, although it seems LG will make most of the initial batteries.
There's also an array of smaller American startups that are scraping by while they search for venture capital funding.
Wilder said that for the stimulus money to be effective, it should be available to these smaller companies that might have good designs but lack lobbying power.
"They just don't have the resources to get the money like GM or Ford, who came late to the game anyway," he said.
Which companies will be eligible for the money, what projects will get funded, and over what period of time: that is what the industry wants to know, said Joseph Muscat, a clean tech director at the accounting and advisory firm Ernst & Young.
A spokesperson for the House subcommittee that wrote the battery portion of the bill said those specifics would be hammered out by the Department of Energy if and when the bill gets approved.
"Clearly, it's a help," said Muscat. "There are a lot of companies here, and it will be interesting to see how the technology plays out."
The measure, which congressional leaders hope to finish next week, currently proposes:
increasing and extending unemployment insurance;
expanding coverage to more low-income and part-time workers;
subsidizing health insurance coverage;
and recharging state unemployment insurance trust funds, which are running dry as layoffs climb.
A growing number of people are depending on unemployment benefits, with continuing jobless claims hitting a record 4.79 million in late January. The figures are sure to grow, with companies shedding more than 250,000 jobs so far this year.
The federal government releases January's national unemployment rate on Friday. Currently at 7.2%, it is expected to rise to 7.5%.
Bigger checks: The unemployed would see their checks rise by $25 a week, paid for with federal funds. The current average weekly benefit is $297.
The increase would have a big impact on those at the lower end of the pay scale who are likely to spend it all, said Wayne Vroman, economist with the Urban Institute.
Mike Grigsby of Portland, Ore., would certainly welcome another $25 a week. The political organizer is scraping by on a weekly pre-tax benefit of $304, which barely covers his $545 monthly rent and other expenses.
"I could have fresh fruits and vegetables in the house, instead of canned goods," said Grigsby, 37, who has been unemployed since November. "I could buy a new interview suit at Goodwill."
The Senate version would also forgive income taxes on the first $2,400 of benefits.
Extended benefits: The bill pushes back the deadline to apply for extended benefits.
The jobless typically get 26 weeks of unemployment insurance, paid for by the states. Last summer, the Bush administration and Congress added an additional 13 weeks of benefits, paid for by the federal government.
In November, federal officials added another seven weeks of benefits in all states. Those who live in states with unemployment rates higher than 6% -- 34 states met that criterion as of December -- could receive a total of 20 additional weeks.
The federal program is set to expire in March, but under the stimulus package, the jobless could apply for the extended federal benefits through Dec. 31.
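On one reading of the figures above, the maximum benefit duration can be tallied as in the sketch below. The tier structure and the 6% threshold come from this article; actual eligibility rules are more complex, and the function name and structure here are illustrative.

```python
# Rough tally of the unemployment benefit weeks described above.
# Figures come from the article; real eligibility rules are more complex.

STATE_WEEKS = 26        # standard state-paid benefits
FEDERAL_EXTENSION = 13  # added by the Bush administration and Congress last summer
HIGH_UNEMPLOYMENT_THRESHOLD = 6.0  # percent

def total_weeks(state_unemployment_rate: float) -> int:
    """Maximum benefit weeks for a worker in a state with the given jobless rate."""
    if state_unemployment_rate > HIGH_UNEMPLOYMENT_THRESHOLD:
        november_addition = 20  # total extra weeks in high-unemployment states
    else:
        november_addition = 7   # extra weeks added in all states in November
    return STATE_WEEKS + FEDERAL_EXTENSION + november_addition

print(total_weeks(7.2))  # high-unemployment state -> 59 weeks
print(total_weeks(5.0))  # other states -> 46 weeks
```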
With the deepening recession making it harder for people to land new positions, extending benefits is crucial, said Heidi Shierholz, economist at the Economic Policy Institute.
Randall Paynter depends on his $320 weekly unemployment check to support his family. Even though his wife works full-time, they are living on half of what they did before Paynter lost his job as a warehouse supervisor in May.
A Rome, Ga., resident, Paynter is back in school studying computers in hopes of getting a job in automated manufacturing. But he doesn't graduate until 2010 so he hopes the federal government keeps extending the benefits.
"I need as much time as I can get to get retrained," said Paynter, 54, who has an 11-year-old son.
Expanding coverage: The package would enact the Unemployment Insurance Modernization Act, which provides $7 billion in incentives for states to expand the ranks of jobless that qualify for benefits.
States that allow workers to count more recent wages in their applications could share in $2.3 billion. The remaining funds would go to states that adopt additional reforms, including providing benefits to those seeking part-time work and those who quit because of a family member's illness or relocation of a spouse.
States would also divvy up $500 million to cover administrative costs.
This expansion would allow more women, part-timers and low-wage workers - who are often the most vulnerable of the unemployed - to collect benefits, Vroman said.
Subsidized access to health insurance: The bill would allow many workers to continue coverage under their former employer's health insurance, known as Cobra, by subsidizing 65% of the premiums for as long as a year. The benefit would apply to those who lose their jobs between September 2008 and the end of 2009.
The typical family premium under Cobra is $1,000, according to House Speaker Nancy Pelosi, D-Calif.
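As a worked example of the 65% subsidy applied to that typical premium (the figures come from the article; the function name is illustrative):

```python
# Worked example of the proposed 65% Cobra premium subsidy.
# The $1,000 figure is the typical family premium cited above.

SUBSIDY_RATE = 0.65  # share of the premium covered under the bill

def worker_share(monthly_premium: float) -> float:
    """Portion of the Cobra premium the laid-off worker would still pay."""
    return round(monthly_premium * (1 - SUBSIDY_RATE), 2)

print(worker_share(1000.00))  # worker pays $350 of a $1,000 premium
```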
The House bill would also allow workers who are 55 and older, or have been with their employer for at least a decade, to extend their Cobra until they become eligible for Medicare or secure coverage with another company.
Also, unemployed people whose family's gross income is below 200% of the federal poverty guidelines could temporarily receive Medicaid, under the House bill. The benefit, paid for by the federal government, would apply to those who lose their jobs between September 2008 and the end of 2010.
Assistance for states: The package would temporarily waive interest payments and accrual of interest on loans taken by states to pay unemployment benefits. Five states - Indiana, Michigan, New York, Ohio and South Carolina - are currently borrowing from the federal government.
Safety and stimulus
Advocates for the jobless are hailing the provisions in the package. They will help those struggling to survive while looking for work, said Christine Owens, executive director of the National Employment Law Project.
On top of that, the provisions will help stimulate the economy, since most people receiving unemployment benefits spend the money quickly, economists said.
Saturday, February 7, 2009
Along with Surrey Satellite Technology (SSTL), Virgin Galactic will look to use a system that is comparable to the United States' Pegasus system – launching satellites into orbit from a plane.
The idea is that the same transport that takes the tourists – WhiteKnightTwo – could serve as the platform to put satellites into orbit for a fraction of the current cost.
"Hopefully we can do it for a lot less money than the current providers," SSTL's Dr Adam Baker told PA.
"It costs something like five million to 10 million dollars at the moment to get one of our smaller satellites into space. What we are targeting is to see if we can do this for a million dollars.
"I think that's a very challenging number but I'm confident we can get very close to that - and if you could build the satellite itself for a couple of million dollars, all of a sudden you've got a very attractive package for well under five million dollars that lets your customers do something pretty capable in orbit."
It is 38 years since the UK government abandoned its successful satellite launcher programme, Black Arrow.
The new venture would be an entirely commercial exercise.
It would see a two-stage rocket launch from underneath a carrier aircraft.
The concept would look similar to the US Pegasus system, which uses a former airliner to lift a booster to 40,000ft, before releasing it to make its own way into orbit.
"If we had our own launcher - something modest, not an enormous vehicle - for a reasonable price, we could service our own needs, both scientific and military, and we could also sell the service on the open market."
SSTL's ideas are being developed with Virgin Galactic, the company set up by billionaire Sir Richard Branson to take fare-paying passengers on short, weightless hops above the atmosphere.
Galactic has a carrier aeroplane, known as White Knight Two. Its primary function will be to lift the space tourists' rocket plane to its launch altitude.
But Galactic also wants to pursue other uses for the White Knight craft, and the idea of using it as a platform to release a British satellite launcher is an appealing one.
"It was based on a then civil service that thought there wasn't going to be a market. They were wrong."
SSTL and Virgin Galactic are hoping to win the backing of the UK science and innovation minister, Lord Drayson, and to gauge the government's interest in helping to fund a short feasibility study.
But any launcher system that did eventually emerge would be a commercial service, not a government operation.
"We'd be looking at a range from 50 to up to a maximum of 200kg because you'd want to do different sizes of satellite," said Mr Whitehorn.
Dr Baker is convinced all the expertise - in composite structures, guidance and avionics, propulsion, etc - exists in the UK to make it happen, but a study would have to prove the technical case and a viable business model.
Although a number of other groups in the UK have pursued a satellite launcher capability, the pedigree of SSTL and Virgin Galactic is likely to make potential investors sit up and take notice.
SSTL is perhaps best known for its Disaster Monitoring Constellation satellites which map the Earth at times of emergency at resolutions between 4m and 32m.
It also produced Giove-A, the first demonstration spacecraft for Europe's forthcoming sat-nav system, Galileo.
SSTL is owned by EADS Astrium, Europe's biggest space company. Astrium is the prime contractor on the mighty Ariane 5 rocket, which lofts some of the biggest satellites in the world.
Virgin Galactic has yet to start its space tourism service. It unveiled White Knight Two last year, and expects to roll out its tourist spaceliner, SpaceShipTwo, later this year.
Mr Whitehorn said the rocket plane also had great potential for doing microgravity research.
"You could take scientists up instead of space tourists and they could conduct their experiments 'live' in the period of microgravity you get on SpaceShipTwo, which is greater than you can get currently with zero-G aircraft.
"And of course you would have the scientists there in a way you couldn't with a sounding rocket, for example."
Wednesday, February 4, 2009
The scientists said the development may eventually enable chips to replace damaged nerve or muscle fibres, and could also be used in future prosthetics.
During the chip manufacturing process, the scientists printed patterns on the smooth silicon surface.
The technique also works with stem cells.
It is hoped the method will eventually enable any type of tissue to be grown on a tailor-made pathway and implanted as prosthetic tissue in the body.
Professor Alan Murray, head of Edinburgh University's School of Engineering and Electronics, who led the research, said: "This is a small but important step on the path towards the long-term goal of many scientists and medical experts - to develop surgical implants using silicon chips.
"We can now make silicon chips with circuitry as well as pathways where cells can grow in the body.
"One of the areas this could be used in is prosthetics - if we can cause cells from damaged tissues to grow where we want.
"It is going towards the realms of science fiction - there is a definite Incredible Hulk feel about it."
He added: "We also hope that, rather sooner than this, the technique will allow better methods of drug discovery and reduce the need for animal testing, as new medicines could be tested on chips rather than in live creatures."
The research was funded by the Engineering and Physical Sciences Research Council.
Tuesday, February 3, 2009
With an upgrade to its mobile maps, Google Inc. hopes to prove it can track people on the go as effectively as it searches for information on the Internet.
The new software to be released Wednesday will enable people with mobile phones and other wireless devices to automatically share their whereabouts with family and friends.
The feature, dubbed "Latitude," expands upon a tool introduced in 2007 to allow mobile phone users to check their own location on a Google map with the press of a button.
"This adds a social flavor to Google maps and makes it more fun," said Steve Lee, a Google product manager.
It could also raise privacy concerns, but Google is doing its best to avoid a backlash by requiring each user to manually turn on the tracking software and making it easy to turn off or limit access to the service.
Google also is promising not to retain any information about its users' movements. Only the last location picked up by the tracking service will be stored on Google's computers, Lee said.
The software plots a user's location — marked by a personal picture on Google's map — by relying on cell phone towers, global positioning systems or a Wi-Fi connection. The system can follow people's travels in the United States and 26 other countries.
It's left up to each user to decide who can monitor their location.
The social mapping approach is similar to a service already offered by Loopt Inc., a 3-year-old company located near Google's Mountain View headquarters.
Loopt's service already is compatible with more than 100 types of mobile phones.
To start out, Google Latitude will work on Research In Motion Ltd.'s Blackberry and devices running on Symbian software or Microsoft Corp.'s Windows Mobile. It will also operate on some T-Mobile phones running on Google's Android software and eventually will work on Apple Inc.'s iPhone and iPod Touch.
To widen the software's appeal, Google is offering a version that can be installed on personal computers as well.
The PC access is designed for people who don't have a mobile phone but still may want to keep tabs on their children or someone special, Lee said. People using the PC version can also be watched if they are connected to the Internet through Wi-Fi.
Google can plot a person's location within a few yards if it's using GPS or might be off by several miles if it's relying on transmission from cell phone towers. People who don't want to be precise about their whereabouts can choose to display just the city instead of a specific neighborhood.
There are no current plans to sell any advertising alongside Google's tracking service, although analysts believe knowing a person's location eventually will unleash new marketing opportunities. Google has been investing heavily in the mobile market during the past two years in an attempt to make its services more useful to people when they're away from their office or home computers.
And 81% improved by at least one point on a scale of neurological disability, The Lancet Neurology reported.
Further tests are now planned, and a UK expert called the work "encouraging".
MS is an autoimmune disease which affects about 85,000 people in the UK.
It is caused by a defect in the body's immune system, which turns in on itself, causing damage to the nerves which can lead to symptoms including blurred vision, loss of balance and paralysis.
Over a 10-15 year period after onset, most patients develop secondary-progressive MS, with gradual but irreversible neurological impairment.
It is not the first time this treatment - known as autologous non-myeloablative haemopoietic stem-cell transplantation - has been tried in people with MS, but there has not been a great deal of success.
The researchers at Northwestern University School of Medicine in Chicago said most other studies had tried the transplants in people with secondary-progressive MS where the damage had already been done.
In the latest trial, patients with earlier stage disease who, despite treatment, had had two relapses in the past year were offered the transplant.
Stem cells were harvested from the patients and frozen while drugs were given to remove the immune cells or lymphocytes causing the damage.
The stem cells were then transplanted back to replenish the immune system - effectively resetting it.
Five patients in the study relapsed, but went into remission after receiving other therapy.
The researchers are now doing a randomised controlled trial in a larger number of patients to compare the treatment with standard therapy.
Study leader Professor Richard Burt said this was the first MS study of any treatment to show reversal of damage.
"You don't want to wait until the horse has left the barn before you close the barn door - you want to treat early.
"I think the reversal is the brain repairing itself.
"Once you're at the progressive stage you have exceeded the ability of the brain to repair itself," he said.
However, he cautioned that it was important to wait for the results of the larger trial.
He added that he would not call it a cure, but rather "changing the natural history of the disease".
Dr Doug Brown, research manager at the MS Society, said the results were very encouraging.
"It's exciting to see that in this trial not only is progression of disability halted, but damage appears to be reversed.
"Stem cells are showing more and more potential in the treatment of MS and the challenge we now face is proving their effectiveness in trials involving large numbers of people."