Anticipating Collisions between Spacecraft and Space Junk

In September, a piece of debris broke off from a 19-year-old nonoperational NASA satellite 330 miles overhead. The United States Space Surveillance Network (SSN), which is responsible for monitoring the more than 22,000 satellites and other objects in orbit, detected the event, plotted the fragment's orbital path, and determined that it was headed for the International Space Station (ISS). If it hit the $100 billion laboratory, the junk could cause catastrophic damage. Upon receiving the warning, NASA decided to maneuver the spacecraft out of the path of the debris, a task it now performs about twice a year. The threat of such a collision has more than doubled in just the past two years, says Nicholas L. Johnson, NASA's chief scientist for orbital debris.

More than half a million man-made objects the size of a marble or larger are now circling Earth -- and 15,000 of those are bigger than a fist. This orbital debris, or "space junk," includes inactive satellites, spent rocket bodies, materials from solid rocket motors, collision fragments, and mission waste. Most operational spacecraft use protective shielding to mitigate the impact of objects less than one centimeter in diameter. But since the larger ones are racing around Earth at speeds of five miles per second, any one of them could destroy any satellite it collided with. The situation imperils the $160 billion satellite services industry, which plays a critical role in international phone calls, television broadcasts, climate and weather data, and military surveillance.

[Figure: The growth rate of debris objects larger than ten centimeters orbiting Earth. Credit: NASA]

To understand how such threats will evolve and to foresee the paths of space junk so that collisions can be avoided, NASA developed one of the world's most sophisticated predictive models. Called Legend (for "low-Earth to geosynchronous environment debris"), the three-dimensional model simulates the routes of all trackable space objects and even factors in new debris from future crashes. To take uncertainty and randomness into account, hundreds of scenarios are generated using the Monte Carlo method, a set of algorithms that can calculate risk factors in a complex environment. With Legend, NASA scientists use the average of multiple simulations to estimate the number, size, and type of objects that will collide -- and approximately how often. Unlike models used by the U.S. Strategic Command Joint Space Operations Center, which detects and tracks large objects and screens active satellites daily for possible collisions within 72 hours, Legend includes smaller fragments and looks far into the future.
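To make the Monte Carlo idea concrete, here is a minimal, hypothetical sketch -- not Legend's actual physics or parameters, and with invented numbers throughout: treat each pair of tracked objects as independently colliding with a tiny annual probability, draw many randomized futures, and average the outcomes.

```python
import numpy as np

def estimate_collisions(n_objects=22_000, p_pair_per_year=1e-9,
                        years=20, trials=1_000, seed=0):
    """Average number of collisions over `years`, across `trials` futures.

    Each of the ~n^2/2 object pairs is assumed to collide independently
    with a tiny annual probability, so the yearly collision count is
    well approximated by a Poisson draw.
    """
    rng = np.random.default_rng(seed)
    n_pairs = n_objects * (n_objects - 1) // 2
    lam = n_pairs * p_pair_per_year          # expected collisions per year
    yearly = rng.poisson(lam, size=(trials, years))
    return yearly.sum(axis=1).mean()         # average over scenarios

print(estimate_collisions())  # roughly lam * years, ~4.8 with these toy numbers
```

Averaging over many trials, rather than trusting any single simulated future, is the heart of the Monte Carlo approach the article describes.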

In place since 2004, the NASA model is constantly fed with data gathered from the results of ground tests and spacecraft that have broken up in orbit; from telescopes and radars viewing the sky; and from analysis of crater-marked spacecraft surfaces that have returned to Earth. That means new simulations must be run continually. Legend enables scientists to calculate the consequences of a particular breakup or collision and helps them alert managers at the space station that a piece of debris could be in its path. The model also advises soon-to-launch satellites of areas to avoid and will guide scientists as they attempt to develop and launch debris removal technology for the first time.

Powered by WizardRSS | Best Membership Site Software

Source: http://feeds.technologyreview.com/click.phdo?i=5115a67cee6d8ddfc4f8ee1bb54fa305


Dim Prospects for Energy R&D Funding

As Congress rushes to finish its business before the year's end, it is likely to pass one of two spending bills to keep the government running in 2011. Either way, funding for energy R&D is expected to be stagnant next year and decrease in 2012.

On Wednesday, the House passed a $1.1 trillion spending bill. If the Senate follows suit, which is by no means certain, overall funding for energy R&D will remain level -- but some specific research programs could take a significant funding hit. Congress may yet pass another version of the spending bill that will rescue some of these programs. However, the outlook for energy R&D remains bleak, especially for 2012.

All this comes after a year of rhetoric from the White House and key members of Congress about the urgent need for more energy R&D. But Congress failed to pass comprehensive climate and energy legislation that would have directly funded new energy projects, leaving funding to the regular budget, which has been limited. Congress also hasn't passed a single appropriations bill yet this year -- the government has been running on temporary spending bills since October -- and it is running out of time to pass one in the current session.

Under the president's proposed budget -- a document submitted by the administration in February and meant to guide Congress in crafting its spending bills -- several energy-related research programs would have received significant new funding. It called for $300 million for ARPA-E, an agency founded to foster high-risk but potentially high-reward R&D. The first projects funded by ARPA-E include one focused on developing a cheaper way to make silicon wafers for solar cells and another investigating new battery designs that could give electric cars a range of 500 miles. The president's budget also requested a $218 million increase in spending for energy R&D in the Department of Energy's Office of Science, as part of a long-term plan to double funding for physical sciences research in order to keep the United States competitive in this area -- a goal that's part of the 2007 America Competes Act.

The president's budget also called for the addition of a fourth "Energy Innovation Hub" to the three that were approved by Congress last year. These innovation hubs, devised by Energy Secretary Steven Chu, are meant to bring together the best researchers and engineers to tackle key energy-related issues, in the style of the Manhattan Project or Bell Labs. Last year, Congress funded an innovation hub for making fuels using sunlight, another for increasing the energy efficiency of buildings, and a third for simulation tools to advance nuclear energy. Each of these would get $24 million under the president's proposed budget. The new hub for developing better batteries would get $30 million. The budget also called for spending on related Energy Frontier Research Centers to increase from $100 million to $140 million.

Congress is considering two options. The first is to adopt a continuing resolution that will keep overall funding levels equal to 2010's (about $50 billion less than the president's budget), while rearranging where some of that funding goes. That's the spending bill the House passed on Wednesday, and it now goes to the Senate. The other option is a more comprehensive "omnibus" bill that lumps together a dozen appropriations bills that committees have crafted based on President Obama's budget request but modified in ways meant to make them more likely to pass or to reflect the preferences of the committee members. Overall, the omnibus bill would be better for energy R&D funding, though not as good as the budget the president requested.

In either of the options before Congress now, ARPA-E is likely to continue to get funding. Although ARPA-E was created in 2007, it wasn't funded until the Recovery Act of 2009, and it has been running on that money since, without substantial funding from the regular budget. Keeping funding at 2010 levels could have prevented ARPA-E from funding any new projects, or even killed it. The House continuing resolution, however, allows the Department of Energy to give ARPA-E up to $300 million -- the amount the president requested. But this must come at the expense of other DOE research funding, either for the Energy Efficiency and Renewable Energy program or the Office of Science. If the omnibus bill passes instead, ARPA-E is likely to get funding of its own, but about $100 million less than the president requested.

Under the continuing resolution, funding for the Office of Science, the Energy Frontier Research Centers, and the Energy Innovation Hubs will continue at 2010 levels instead of getting the increases President Obama requested. But the omnibus bill will be good news for some of these energy R&D programs. The Office of Science will get a small increase ($70 million as opposed to the $218 million the president requested), and the new battery innovation hub is likely to be funded.

It's not clear which option -- the omnibus bill or the continuing resolution -- will win, says Patrick Clemins, director of the R&D budget and policy program for the American Association for the Advancement of Science. The House has passed the continuing resolution, but the Senate may prefer passing an omnibus bill. What is clear is that funding for energy R&D overall will remain essentially flat -- a trend that's been going on since 2004, he says, in spite of many calls for increased energy R&D over this time.

Things might be even worse in the 2012 budget. The new Republicans in the House were elected with a mandate to decrease government spending. ARPA-E, the benefits of which won't be clear for years and whose funding goes largely to research in Democratic states, "is the kind of thing that is easily killed," says David Victor, a professor at the School of International Relations and Pacific Studies at the University of California, San Diego. Mark Muro, a senior fellow at the Brookings Institution, hopes that Republicans and Democrats can find common ground in some areas, such as support for nuclear power. But, he says, there is widespread fear that existing energy R&D will be cut. "We could be playing defense rather than moving it forward," he says.


Source: http://feeds.technologyreview.com/click.phdo?i=73b531b486381416d8cf7851848fad6b


A Magnetic Shortcut to Clinical Trials

Even the most promising drug will fail if it never reaches its target. So before starting large clinical trials, pharmaceutical companies must determine, among other things, the precise dosage to use, a process that can be expensive and time-consuming. Scientists investigating a drug for Parkinson's disease have now shown how an MRI scan can quickly determine the optimal dosage for drugs that act on the brain.

The most precise way to track drugs as they move through the body is a PET (positron emission tomography) scan, in which a drug is radioactively tagged, injected into the body, and tracked with a scanner. But PET scans have several drawbacks, notes Kevin Black, associate professor of psychiatry at Washington University in St. Louis, Missouri, who led the new research, published in the December issue of The Journal of Neuroscience. Both PET scanners and the scans themselves are very costly. And because they expose subjects to radioactivity, multiple PET scans can pose a health risk. As an alternative to PET scans, drug companies sometimes spend months to years assessing optimal dosages via clinical measures such as mood questionnaires or tests of patients' manual dexterity.

The Washington University study, funded by Synosia Therapeutics, is the first to track a drug's effect with an MRI technique called arterial spin labeling (ASL). Using this approach, the researchers determined the optimal dosage of the Parkinson's drug noninvasively, without injections or radioactivity, in four months.

The researchers focused the MRI machine on subjects' neck arteries to tag water molecules in the blood by changing their magnetic properties. These water molecules were visible in subsequent scans, providing a picture of arterial blood flow to particular parts of the brain.

The researchers took scans before and after the administration of different doses of the drug. When they compared the shots, Black and colleagues could see immediately which areas of the brain implicated in Parkinson's showed increased blood flow, owing to the action of the drug. This allowed them to identify the most effective dosage for further testing.
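The before-and-after comparison the team performed can be sketched in a few lines. Everything below -- the array shapes, the 15 percent threshold, the toy data -- is invented for illustration and is not the study's actual analysis:

```python
import numpy as np

def flow_increase_mask(baseline, post_dose, min_increase=0.15):
    """Boolean mask of voxels whose perfusion rose by more than
    `min_increase` (fractional change relative to baseline)."""
    change = (post_dose - baseline) / np.maximum(baseline, 1e-9)
    return change > min_increase

rng = np.random.default_rng(1)
baseline = rng.uniform(0.5, 1.0, size=(8, 8, 8))   # toy 8x8x8 perfusion volume
post = baseline.copy()
post[:4] *= 1.3                                    # pretend the drug boosts flow in one half
mask = flow_increase_mask(baseline, post)
print(mask[:4].all(), mask[4:].any())              # True False
```

Comparing such masks across doses is what would let researchers see which dose produces the strongest response in the regions of interest.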

Using ASL to accelerate the move to large trials will interest drug companies as a cost-cutting measure, says the University of Pennsylvania's John Detre, a neurologist who developed ASL in the early 1990s and was not involved in the new research. "A key go/no-go decision in drug development early on is whether the drug is getting into the brain and doing what you think it's doing," he says. "This study is a fantastic proof of principle."

ASL doesn't have the specificity of PET scans, which can track the way drugs act at a molecular level, says Luis Hernandez of the University of Michigan's MRI Research Facility. "But if you want to know if the drug is changing the part of the brain it should be reaching," he remarks, "then this works well."

An obvious target for ASL is antidepressants, which take two to six weeks or longer to show a clinical effect. With ASL, it is possible to see very quickly whether the drug is affecting the brain -- an indication that it could be effective in alleviating depression. Detre adds that the technique could see more use in other areas of drug development: "You might be able to use this one technique to look at the effects of a very broad range of drugs on the brain."


Source: http://feeds.technologyreview.com/click.phdo?i=1092beb1059e232c821062d101dd2f8f


Blog - A Lego Reconstruction of the World's Earliest Computer

Here's a brand new stop motion video of a reconstruction of the world's first mechanical computer, directed by occasional Technology Review contributor John Pavlus. It's entirely self-explanatory: watch it and read on.

One hundred years before the birth of Christ, when agriculture and the wheel were, for most of human civilization, the apex of technological achievement, the Greeks built a mechanical computer so sophisticated that it could add and subtract--all in the name of predicting the next lunar eclipse.

In 2010, Apple engineer Andrew Carol created a fully functioning Lego replica of the so-called "Antikythera" mechanism, which was discovered in a shipwreck in 1901, basing his build on previous reconstructions of the device. Like the original, it accurately predicts solar eclipses.

Its secrets are explained at length in a feature in Nature and its Wikipedia entry, which, not surprisingly for a device that is catnip for geeks, is as exhaustive as the plot exegeses of old episodes of Lost.

The Antikythera is such a marvelous device--Michael Edmunds of Cardiff University, who led the most recent study of the device, says it is more historically valuable than the Mona Lisa--that it continues to inspire great works of its own: first the historically faithful reconstructions of it, then the Lego reconstruction, and now this stop motion video, which, like all stop motion, was an enormous effort in itself.

Here's a sped-up, behind the scenes video of the shoot:

And if you want to really dig deep, Pavlus has conducted an interview with the creator of the Lego version of the Antikythera. It includes fascinating details about managing the friction generated by the more than 100 gears in the mechanism. Here's Carol's account of how it works:

It's pretty simple; it's all about ratios between the numbers of teeth on two gears meshed together. If one gear has 50 teeth and another has 25, that's a 2-to-1 ratio -- which means that turning the axle one full revolution on the first gear will multiply by two, because it turns the second gear twice as fast.

But the tradeoff is that when you make it go fast, you lose power. It's fast, but it's not strong, and vice versa -- and those mechanical effects pile up quickly when you've got over 100 gears working together in exotic ratios. When I have to multiply by 127, it's got to turn very fast, but with little power, which means that whatever amount of friction there is, I've effectively multiplied it by 127. So I had to put a lot of thought into designing the optimal layout of gears that would minimize the friction enough to make that kind of calculation physically work.
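Carol's ratio arithmetic is easy to reproduce. The tooth counts below are illustrative, not taken from his replica:

```python
def chain_ratio(pairs):
    """Overall speed multiplication of a gear train.

    `pairs` is a list of (driver_teeth, driven_teeth) tuples; each
    meshed pair multiplies the output speed by driver/driven.
    """
    ratio = 1.0
    for driver, driven in pairs:
        ratio *= driver / driven
    return ratio

# 50:25 teeth doubles the speed, as in the quote above.
print(chain_ratio([(50, 25)]))            # 2.0
# Chaining stages compounds the ratio: two 2:1 stages give 4x speed
# (and, roughly, 4x the reflected friction at the fast end).
print(chain_ratio([(50, 25), (40, 20)]))  # 4.0
```

The compounding is exactly why a x127 stage is so punishing: whatever friction exists downstream is effectively multiplied by the full chained ratio.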

Finally, there's Pavlus's account of how he brought the video project itself together. For anyone interested in how to explain complicated technology to a lay audience, it's quite a ride:

But how to actually execute that idea? Obviously some kind of animation would be necessary. Several people I consulted urged me to use computer graphics. But that felt wrong: Legos are wonderfully tactile, and I really wanted to highlight the machine's intricate physical detail -- to make you feel like you could literally reach out and touch the gears or turn the crank. CGI would feel too weightless and abstract -- too perfect. Andy's model was the quintessence of DIY hacking: he didn't even diagram it out before starting to build it. I needed animation that was physical, craft-ey, and a little bit rough around the edges. Stop-motion was the clear choice.

For even more background on the Antikythera mechanism check out this illuminating video from 2008, produced for Nature by a director with decades of experience at the BBC.

Follow Mims on Twitter or contact him via email.


Source: http://feeds.technologyreview.com/click.phdo?i=ffe485fef743233e594cca5eb6d329e0


Weekend Open Forum: Gadgets you can't live without

Those of you born more than a couple of decades ago can probably remember a time when cell phones, personal computers, cable television, and other modern conveniences weren't something you'd take for granted or outright didn't even exist. Some say those were simpler times, but as a technology lover I certainly wouldn't want to live without many of these things. Truth is, in most cases they also make life so much easier.

Digital video recorders, like TiVo, for example, changed the way a lot of people consume television programming and this continues to evolve with the proliferation of broadband connectivity and streaming content. Cell phones are much more than omnipresent communication devices -- which is already amazing by itself -- and now double as anything from real-time GPS navigation systems, to workout companions, gaming handhelds and much more.


Granted, some things are not particularly vital and others we may get just for bragging rights, but I'm sure most of you own at least a handful of gadgets that are part of your everyday lives. Having just moved to another country, I had to leave some things behind and my arsenal is rather limited at the moment -- it's mainly a unibody MacBook, which I dual boot between Windows 7 and Mac OS X, an iPhone 4, a flat-screen TV and a digital SLR camera. My laptop is due for an update soon, I plan to buy a speaker dock, and I'm still considering whether I should get a tablet sometime next year (oh yeah, and probably a nice coffee maker). How about you? What are the gadgets you absolutely can't live without?


Source: http://www.techspot.com/news/41522-weekend-open-forum-gadgets-you-cant-live-without.html


Kinect does Minority Report interface, Air Guitar prototype

Many Kinect motion controller hacks have teased a Minority Report interface, but all of them have only shown that it could be possible with a bit more work. Until now: not only is it possible, but it's been done. The Robot Locomotion Group and Learning Intelligent Systems teams at MIT have developed a system that uses the Kinect drivers for Linux to detect all ten of your fingers, along with your palms, so that you can use them to interact with a display.

"This is a graphical interface inspired by the movie 'Minority Report,' reads the video's description. "It uses the Kinect sensor from Microsoft, and the recently released libfreenect driver for interfacing with the Kinect in linux. The graphical interface and the hand detection software were written at MIT to interface with the open source robotics package 'ROS', developed by Willow Garage (willowgarage.com). The hand detection software showcases the abilities of the Point Cloud Library (PCL), a part of ROS that MIT has been helping to optimize. The hand detection software is able to distinguish hands and fingers in a cloud of more than 60,000 points at 30 frames per second, allowing natural, real time interaction."

Next up we have a prototype of an Air Guitar in action, courtesy of Chris Oshea. The prototype was written in C++ and uses openFrameworks and OpenCV for image processing, the ofxKinect addon and the libfreenect driver on Mac, as well as help from the openFrameworks and OpenKinect communities.

"First it thresholds the scene to find a person, then uses a histogram to get the most likely depth of a person in the scene," Oshea explains. "Then any pixels closer than the person to the camera are possible hands. It also uses contour extremity finding on the person blob to look for hands in situations where your hand is at the same depth as your body. It only works if you are facing the camera front on. Then it uses one hand as the neck of the guitar, drawing a virtual line from the neck through the person centroid to create the guitar line. The other hand is tracked to see if it passes through this line, strumming the guitar. The neck hand position controls the chord."

Of course, there are still limitations, but Oshea made a point to give a big thank you to Microsoft for bringing the technology to the mass market. That's exactly why the Kinect is so ground-breaking: not only is it cheap, but it's also widely available.


Source: http://www.techspot.com/news/41525-kinect-does-minority-report-interface-air-guitar-prototype.html


Will Web Apps Replace Web Sites? [TNW Media]

It seems that lately just about everyone is developing a Web application version of their site. Most recently, with the introduction of the Chrome Web Store, the shift toward more stylized, specialized functionality can only be expected to accelerate. But why?

The jovial, yet misguided answer is that all publishers want to provide their readers with content in the best way possible. While that much might be true to an extent, the larger idea is that publishers are trying very hard to find better ways to monetize that content. In order to get you to purchase an "application" displaying the content of a website, only the very best presentation will do.

Show Me The Money

To address the question in the title first: we believe the answer is both no and yes. While Web apps won't necessarily replace the website today, they are still a viable, distinctive option for publishers. As our own Courtney Boyd Myers answered, the value-added features of a pristine Web application could very well be another revenue stream for the publisher.

Some website owners are already catching on. USA Today, for instance, has developed an app for the Web Store that the company sees as a way to deliver more content than what is available simply through the USAToday.com site. According to Steve Kurtz, USA Today's VP of digital development:

I think it gives us the opportunity to execute our business strategy, whether that?s paid content or display advertising or a combination of both.

Shifting Ideas

I had a chance to pick the brain of Kate O'Neill, founder and CEO of [meta]marketer. The Nashville-based company, which helps websites optimize every aspect of their content, had some interesting insight into the scenario. Her first question brings up a very strong point:

How far into the future are you looking?

As O'Neill points out, the present version of what we're seeing is sites and companies simply adopting the latest thing. However, she also makes a point with which we firmly agree: we're witnessing the browser experience learning from the mobile platform.

At a time when so much of what we do is going mobile, going cloud-based and becoming accessible regardless of location, there are aspects of the application format that have to be given due attention. As O'Neill states, "It's a very context-driven experience. While it could be very good, it could also be very onerous for site owners to keep up with. The question is how much value it is providing to the business of the website."

What needs to be understood about this shift, though, is that it opens a lot of doors that we've previously been afraid to approach. For instance, the analytics community was up in arms not long ago when Google began measuring RSS reads as page displays. However, it's likely that Google was ahead of its own time in its measurement, because now that door will open again.

While Google did back down from the measurement, what it likely should have done, as O'Neill points out, is develop a new metric for measuring this traffic. The views provided via Chrome (and other platform) apps likely need a measurement all their own. Not only would that help site owners better optimize the content they provide, but it would also feed back into monetization and a better user experience.

UX for You

O'Neill's statement about the apps being onerous to site owners led me to get in touch with Mitch Canter of StudioNashvegas. While Mitch spends the majority of his time working with WordPress, he did have some great insight into the factors that will be involved in a long-term commitment to the Web app platform.

There are a couple of things going on. First, there have to be the right protocols in place [to make the installation and purchase process seamless] and there are server issues to be resolved from a logistical hurdle [for starting your application platform].

However, Canter does see it as a long-term, viable option for sites. I asked him if he thought we'd see a lot more sites moving toward this platform in the future:

I think so. A lot of people are going to try to play within the system. You're going to see a lot of those niche sort of markets come up as people try to take the technology and embrace it. The technology is only getting better, HTML5 will start to be more prevalent. We can start to standardize things.

That embrace, and the shift toward a "platformless" society, should continue to open more doors as well. In speaking with Iain Dodsworth from TweetDeck, we're offered some insight into the motivations that drove the company to build the TweetDeck App for Chrome, affectionately known as ChromeDeck:

I always wanted to do a browser based TweetDeck (and it has been the #1 requested feature since day one pretty much) but had doubts as to how much engagement the product would get if it was just another tab. I knew it was the wrong approach to release a simple version of TweetDeck (TweetDeck Lite) but we couldn't do a full version of TD until Chrome Apps came along.

Interestingly, we've also been told that ChromeDeck is unlike most of the applications that you see in the Chrome Web Store. While the majority of "apps" are simply webpages hosted on other servers, ChromeDeck was built from the ground up to run natively in the browser. There is no reliance on servers from TweetDeck itself, which takes a potential problem out of the chain for the user.

Back to the Front

So the question remains: will Web apps replace Web sites? The answer, it seems, is threefold.

For most sites, the app will be a value-added feature that can be monetized. Additional content can be distributed through the application and made more readily available. For others, the move to an application versus a website will be fitting, especially so if it is well-suited to the mobile lifestyle. Yet the third tier remains, in which sites will likely move from the scenario of additional feature and into application-based.

For now, the doors are wide open and it's a brave new world for site owners to explore. We're hugely excited to see what will come next.


Source: http://thenextweb.com/media/2010/12/10/will-web-apps-replace-web-sites/

elizabeth smart story salvia effects stack overflow at line 0 railgun

Blog - A Lego Reconstruction of the World's Earliest Computer

Here's a brand new stop motion video of a reconstruction of the world's first mechanical computer, directed by occasional Technology Review contributor John Pavlus. It's entirely self-explanatory: watch it and read on.

One hundred years before the birth of Christ, when agriculture and the wheel was for most of human civilization the apex of technological achievement, the Greeks built a mechanical computer so sophisticated that it could add and subtract--all in the name of predicting the next lunar eclipse.

In 2010, Apple engineer Andrew Carol created, based on previous reconstructions of the so-called "Antikythera" mechanism, which was discovered in a shipwreck in 1901, a fully functioning Lego replica of the device. Like the original, it accurately predicts solar eclipses.

Its secrets are explained at length in a feature in Nature and its Wikipedia entry, which, not surprisingly for a device that is catnip for geeks, is as exhaustive as the plot exegeses of old episodes of Lost.

The Antikythera is such a marvelous device--Michael Edmunds of Cardiff University, who led the most recent study of the device, says it is more historically valuable than the Mona Lisa--that it continues to inspire great works of its own: first the historically faithful reconstructions of it, then the lego reconstruction, and now this stop motion video, which, like all stop motion, was an enormous effort in itself.

Here's a sped-up, behind the scenes video of the shoot:

And if you want to really dig deep, Pavlus has conducted an interview with the creator of the Lego version of the Antikythera. It includes fascinating details about managing the friction generated by the more than 100 gears in the mechanism. Here's Carol's account of how it works:

It's pretty simple; it's all about ratios between the numbers of teeth on two gears meshed together. If one gear has 50 teeth and another has 25, that's a 2-to-1 ratio -- which means that turning the axle one full revolution on the first gear will multiply by two, because it turns the second gear twice as fast.

But the tradeoff is that when you make it go fast, you lose power. It's fast, but it's not strong, and vice versa -- and those mechanical effects pile up quickly when you've got over 100 gears working together in exotic ratios. When I have to multiply by 127, it's got to turn very fast, but with little power, which means that whatever amount of friction there is, I've effectively multiplied it by 127. So I had to put a lot of thought into designing the optimal layout of gears that would minimize the friction enough to make that kind of calculation physically work.

Finally, there's Pavlus's account of how he brought the video project itself together. For anyone interested in how to explain complicated technology to a lay audience, it's quite a ride:

But how to actually execute that idea? Obviously some kind of animation would be necessary. Several people I consulted urged me to use computer graphics. But that felt wrong: Legos are wonderfully tactile, and I really wanted to highlight the machine's intricate physical detail ? to make you feel like you could literally reach out and touch the gears or turn the crank. CGI would feel too weightless and abstract ? too perfect. Andy's model was the quintessence of DIY hacking: he didn't even diagram it out before starting to build it. I needed animation that was physical, craft-ey, and a little bit rough around the edges. Stop-motion was the clear choice.

For even more background on the Antikythera mechanism, check out this illuminating video from 2008, produced for Nature by a director with decades of experience at the BBC.




Source: http://feeds.technologyreview.com/click.phdo?i=ffe485fef743233e594cca5eb6d329e0


Adobe announces Flash Player for Chrome OS

Adobe has announced the Chrome notebook Pilot program, which will see Flash Player ported to Chrome OS. Google's latest mobile operating system is yet another platform that Adobe wants its three million Flash developers to develop for. Currently, the company says that Flash Player 10.1 support is a work in progress and acceleration for video is a top priority.

In other words, Adobe is not waiting for version 10.2 to come out of beta; it is pushing forward to get Chrome OS users onboard right off the bat with the first hardware. In fact, Chrome OS already ships with Flash Player 10.1, but it's hardly optimized just yet.

"Video performance in particular is the primary area for improvement and we are actively working with the engineers at Google to address this," an Adobe spokesperson said in a statement. "Enabling video acceleration will deliver a more seamless experience on these devices. Because Flash Player is integrated directly into Chrome Notebooks, users will automatically benefit from the latest features and improvements as new versions of the software are pushed out."

Adobe also took the opportunity to share some data around the plug-in. Flash video streaming is on the rise, with more than 100 percent year-over-year growth over the past two years. Furthermore, in one month alone, 120 petabytes of video are streamed via Flash.
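To put that monthly figure in perspective, a back-of-envelope conversion to sustained bandwidth (assuming a 30-day month and decimal petabytes -- both assumptions on our part, not figures from Adobe):

```python
# Rough average throughput implied by 120 PB of Flash video per month.
PETABYTE = 10**15                       # bytes (decimal, assumed)
seconds_per_month = 30 * 24 * 3600      # 30-day month, assumed
bytes_per_second = 120 * PETABYTE / seconds_per_month
gigabits_per_second = bytes_per_second * 8 / 10**9
print(f"{gigabits_per_second:.0f} Gbit/s sustained")  # roughly 370 Gbit/s
```

That works out to a sustained average on the order of hundreds of gigabits per second across the Flash ecosystem, before accounting for peak-hour spikes.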


Source: http://www.techspot.com/news/41519-adobe-announces-flash-player-for-chrome-os.html


Stretchable Silicon Could Make Sports Apparel Smarter

Stretchable silicon electronics that offer the computing power of rigid chips could make their way into Reebok's athletic apparel in the coming years. The company will work with MC10, a startup maker of flexible electronics, to develop sportswear that incorporates electronics to monitor athletes' health and performance during training and rehabilitation.

Reebok and MC10, which is based in Cambridge, Massachusetts, would not provide specifics about what products are under development. Representatives say the goal of the project is to make the interface between people and their electronics disappear. "We want to bring more information to the athlete, using the [conformable electronics] technology in a way that makes the electronics invisible to the user," says Paul Litchfield, head of Reebok Advanced Concepts.

Textiles incorporating electronics are already available today, for example in sports bras that use conductive textiles to register a woman's heart rate. But today's devices connect to a box containing the heart of the electronics, which are built on rigid chips. In the bra, a removable plastic box beams a signal to a watch.

Clothing incorporating high-performance conformable electronics could have many advantages over these systems, says MC10 CEO David Icke. First, the electronics could be totally incorporated into the inside of a shirt, or into a decal placed directly on the skin, without the need for a casing. They could conform to the body, and their increased level of contact with the skin could lead to higher-quality measurements. And by incorporating transistors that can amplify and process signals for better sensitivity, the flexible electronics would deliver more-valuable information. "It's not like wearing a device with hard segments attached to the body," says Litchfield.

The athletic-apparel devices might incorporate sensors and a microprocessor to monitor many indicators of an athlete's health, such as impacts on the body, electrical information from the heart and nervous system, sweat pH, blood pressure, gait, and strain on joints. Such devices could process the data to generate information about metabolism and athletic performance and broadcast it to another device. MC10 says the products could be out within a year or two.
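The pipeline described above -- sample on-body sensors, process locally, broadcast only a compact result -- can be sketched in a few lines. Everything here is hypothetical (MC10 and Reebok have published no API or data format); it only illustrates the sense-process-broadcast pattern:

```python
from statistics import mean

def summarize(samples):
    """Reduce raw on-body readings to a compact summary for broadcast.

    samples: list of dicts with hypothetical keys 'heart_rate' (bpm)
    and 'impact_g' (peak acceleration, in g). Processing on the garment
    itself means only this small summary needs to leave the body.
    """
    return {
        "avg_heart_rate": mean(s["heart_rate"] for s in samples),
        "max_impact_g": max(s["impact_g"] for s in samples),
    }

# Illustrative readings from a training session.
readings = [
    {"heart_rate": 140, "impact_g": 1.2},
    {"heart_rate": 152, "impact_g": 4.8},
    {"heart_rate": 148, "impact_g": 2.1},
]
print(summarize(readings))
```

The design point is the one Icke makes: because the conformable electronics include transistors that can amplify and process signals in place, the garment can transmit derived information rather than raw waveforms.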

The researcher who cofounded MC10, University of Illinois materials science professor John Rogers, has prototyped sensors, processors, and light-emitting diodes based on silicon and built on thin, lightweight, flexible, and even stretchy materials. Like conventional silicon chips, these flexible electronics are fast and power-efficient. Other flexible electronics, based on organic semiconductors rather than silicon, tend to be slower and more power-hungry. Working with organic materials, researchers at Xerox's PARC have made printed sensor tape for the U.S. military that's mounted inside helmets to record blast strength, temperature, and other data, and includes transistors to process the data.

MC10's devices are made by etching out very thin strips of silicon and printing them onto flexible substrates. This lets them conform to uneven surfaces such as human skin. Rogers notes that other products under development by MC10 include electronics for interfacing between the body's delicate inner tissues and surgical instruments such as balloon catheters. "From the standpoint of mechanics and materials design, there are many foundational issues common to use inside and outside the body," he says.

As the performance gap between rigid chips and conformable electronics begins to close, the idea of a wearable computer begins to seem less speculative, says Juan Hinestroza, who heads the Textiles Nanotechnology Laboratory at Cornell University in Ithaca, New York. "Those were impossible dreams, but now we can produce high-performance electronics on flexible substrates," says Hinestroza, who is not affiliated with Reebok or MC10. "The interface between electronics and garments will disappear," he predicts.


Source: http://feeds.technologyreview.com/click.phdo?i=db34da1a2b4e286d9c2b51237929a91c
