How Design Software Will Shape Manufacturing's Future

Autodesk, a multinational software company based in San Rafael, California, makes 3-D design software used by everyone from automotive manufacturing giants to Hollywood studios. Now it is betting that those digital tools will have an increasingly powerful role in what happens on factory floors, enabling manufacturers to embrace more flexible strategies that deliver more customized products.

Buzz Kross, who heads the company's manufacturing industry group, says the manufacturers he works with see an opportunity in new technology at a time when they sense that the boom in outsourcing to China has run its course. "There have always been companies that differentiate based on their ability to manufacture most efficiently, and others based on design and invention; it's the difference between GM and Tesla," says Kross. "Now a lot of manufacturers are leaning more to the design model."

Kross says that rising costs in China's maturing economy and high-profile problems with outsourced components, like those that plagued Boeing's 787, are making the model of high-volume, low-cost outsourced production less economically attractive. As a result, a wider range of companies is considering a more flexible, premium approach to manufacturing that was previously limited to a relatively small niche. Kross is trying to help that trend along with software such as Inventor, which provides a way to digitally prototype and test mechanical designs, and Streamline, which lets engineers, designers, and managers collaborate on a design. Both are intended to speed the journey from digital drawing board to factory floor.

"You don't need to center everything on making millions of the same thing at the absolute cheapest price anymore," says Kross. He cites the growing popularity of a model known as ETO (engineer to order), in which businesses buying from manufacturers order by referring to a list of general rules, not a catalogue and price list. For each order, a manufacturer makes and assembles a product very specific to the customer's needs. That approach also cuts costs, because raw materials and parts don't have to be held in stock; rather, they can be purchased to match the latest order. And the customized products can command a higher price than a conventionally made one, Kross says: "These companies capture a larger share of the customer's wallet this way."

That style of manufacturing makes the design process, and design software, much more central. Kross says that 3-D printing technology will blur the line between design and manufacturing still further.

"Everybody's already embracing it for prototyping," says Kross. "You can already print moving components and subassemblies that don't need any assembly. That's incredibly useful, whether you make pumps or power trains or chairs." Nike, an Autodesk customer, prototypes shoes by using a printer to squirt out materials that have more or less compressibility, depending on how bouncy and flexible each part of the sole is meant to be.

The next step is for 3-D printing to become a manufacturing method rather than solely a prototyping tool, says Kross. Small companies are already trying this, but it won't be long before large manufacturers follow suit. "Think about when you buy a Dell computer and they let you choose all the different components," Kross says. "3-D printing for manufacturing will allow you to have that, but with nearly infinite options."

This process may cost manufacturers more than production at a more conventional or offshore factory. But as with the ETO approach, more customized products fetch higher prices, says Kross. Jewelry, furniture, and consumer electronics are all areas that could benefit from the new techniques, he says. "People don't like it when they have the same thing as everyone else and will pay more to get exactly what they choose."


Source: http://feeds.technologyreview.com/click.phdo?i=d75610f7faca60c9802753af71fd1fcc


Three-Dimensional Photonic Crystals Shine

For the first time, researchers have made high-quality three-dimensional photonic crystals and used them to make a highly efficient light-emitting diode (LED). Three-dimensional photonic crystals promise to boost the performance of just about any optical device, be it a display, a solar cell, or an efficient lightbulb, but until now no one had been able to make them using commercially viable methods or workable materials. Researchers at the University of Illinois at Urbana-Champaign are now working on solar cells based on the structures.

Photonic crystals can control the absorption, emission, and movement of light in a very precise way based on their structure. They've been a hot area of research since the late 1980s. So far, it's only been practical to make flat, two-dimensional photonic crystals. These control the movement of light very well in two dimensions, but not perfectly in the third. Still, they've been very successful. A company called Luxtera, for example, has developed ways of building photonic-crystal-based optical interconnects directly onto computer chips. Bringing optical signals closer to computer processors helps speed data transmission, and using photonic crystals helps keep the size of these links compact. Luminus has focused on LEDs, for which the crystals help improve light output, making these devices brighter and more power-efficient.

However, three-dimensional photonic crystals would make even better optical devices. "The key advantage is, you can really control the propagation of light in all dimensions," says Paul Braun, professor of materials science and engineering at the University of Illinois. Braun is leading the work on three-dimensional photonic crystals, and his group is also working on making solar cells from the crystals.

Making these structures is tricky. Photonic crystal structures vary, but they're often made by drilling nanoscale holes, rods, and other features into a material. Patterning a flat slab of material with the necessary nanoscale structures to make a two-dimensional photonic crystal is a relatively simple process. It's far more difficult to get that kind of patterning into a thick chunk of material to make a three-dimensional structure without degrading the material. And the kinds of photonic crystals that are most useful (those that can actively convert between electrical signals and optical ones, in addition to precisely manipulating the flow of light) are the hardest to make, because material flaws are introduced during the process. This conversion between light and electricity is critical in LEDs, solar cells, and optical data interconnects for computing.


Source: http://feeds.technologyreview.com/click.phdo?i=508ce69245ef4a38183da616135a7137


Developer releases stats on most common domain typos

Earlier this year, developer Christopher Finke began collecting anonymous opt-in usage data from his "URL Fixer" browser add-on -- software that automatically corrects common typos made by users when entering a Web address. With six months of data under his belt, Finke has decided to share some of the stats, showing the most popular URL entries and typos.

Unsurprisingly, the most commonly typed domain is facebook.com, which accounts for 9% of all URL entries among URL Fixer users. Google.com was a distant second at 3.3% of all domain entries, though it's worth noting that many people never actually type Google's address because modern browsers integrate search into the address bar.

Other Google sites, YouTube and Gmail, accounted for 3.3% and 1.7% respectively (1.1% for gmail.com and 0.6% for mail.google.com). Twitter.com was the fifth most popular domain at 1.1%, followed by yahoo.com and hotmail.com at 0.6% each, and amazon.com and reddit.com at 0.5% each. Meanwhile, .com is the most popular TLD with 63% of the entries, versus 4% for .net, .org, and .de.

The most frequently mistyped addresses are "faceboook.com" (three o's) and "goole.com," though neither typo happens very often: faceboook.com is entered only once for every 7,930 times that someone types the site's address correctly. The stats also show a near-even divide between folks who enter Web addresses with "www." and those who don't (49.5% versus 50.5%).

The most frequently mistyped TLDs are .com\, .ocm, .con, .cmo, .copm, .xom, ".com,", .vom, .comn, .com', ".co,", .comj, .coim, .cpm, .colm, .conm, and .coom. It's worth noting that the data doesn't include bookmarked links or links that users click on -- only domains that have been typed into the address bar. Head over to Finke's blog for additional figures and charts.
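The kind of correction an add-on like URL Fixer performs can be sketched in a few lines: match a mistyped ending against a known list and substitute the intended TLD. The typo list below comes from the article; the function itself is illustrative and is not Finke's actual implementation.

```python
# Mistyped ".com" endings reported in the article (the first entry
# is ".com" followed by a literal backslash).
COMMON_COM_TYPOS = [
    ".com\\", ".ocm", ".con", ".cmo", ".copm", ".xom", ".com,",
    ".vom", ".comn", ".com'", ".co,", ".comj", ".coim", ".cpm",
    ".colm", ".conm", ".coom",
]

def fix_tld(url: str) -> str:
    """Replace a known mistyped .com ending with the correct one."""
    for typo in COMMON_COM_TYPOS:
        if url.endswith(typo):
            return url[: -len(typo)] + ".com"
    return url

print(fix_tld("facebook.ocm"))  # prints facebook.com
```

A real add-on would also need to handle legitimate TLDs that happen to look like typos, which is why URL Fixer reportedly prompts the user before correcting some entries.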


Source: http://www.techspot.com/news/44848-developer-releases-stats-on-most-common-domain-typos.html


New Language for Programming in Parallel

A new programming language has been designed to get the most out of the latest multicore computer processors. If it finds favor among coders, it could provide more powerful software for many computers.

Over the last few years, as they've run up against the physical limits of miniaturization, microchip makers have shifted from increasing the power of processor cores (the part of a chip that handles data and instructions) to adding more cores to a single chip. For example, Intel's i3 and i7 processors have two and four cores, respectively.

This presents a challenge for programmers. Since most programming languages were designed for single-core chips, it can be tricky to divide up tasks and send them to each core in parallel. If a coder isn't careful, errors can creep into the way the cores access shared sections of memory.

Tucker Taft, the chief technology officer and chairman of the Boston-based software company SofCheck, designed the new language, called Parallel Specification and Implementation Language (ParaSail), specifically for writing software for multicore processors. The language is intended to avoid the pitfalls that typically arise when working with multicore chips.

To a programmer, ParaSail looks like a modified form of C or C++, two leading languages. The difference is that it automatically splits a program into thousands of smaller tasks that can then be spread across cores, a trick called pico-threading, which maximizes the number of tasks being carried out in parallel, regardless of the number of cores. ParaSail also does the debugging automatically, which makes code safer. "Everything is done in parallel by default, unless you tell it otherwise," Taft says.
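ParaSail code isn't shown in the article, but the parallel-by-default idea it describes, splitting a computation into many small independent tasks and letting a scheduler spread them over however many cores are available, can be sketched with Python's standard library. This is an illustration of the concept only, not ParaSail syntax.

```python
# Illustration of the pico-threading idea: break a computation into
# many small independent tasks and hand them to a pool, so the code
# doesn't depend on the actual core count. This is ordinary Python,
# not ParaSail; for CPU-bound work a ProcessPoolExecutor would be
# used instead to sidestep the interpreter lock.
from concurrent.futures import ThreadPoolExecutor

def square(n: int) -> int:
    return n * n

def parallel_sum_of_squares(values) -> int:
    # Each element becomes its own task; because the tasks share no
    # mutable state, they can run in any order on any core.
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(square, values))

print(parallel_sum_of_squares(range(10)))  # prints 285
```

The key property ParaSail enforces at the language level, and which the sketch only imitates by convention, is that tasks cannot touch shared mutable state, which is what makes running everything in parallel by default safe.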

Over the next decade, the number of cores on computer chips is expected to increase even further. "There are some machines out there with dozens or hundreds of cores now," says Taft.

ParaSail uses a number of other tricks, some drawn from languages developed in the late 1980s and early 1990s for supercomputers (machines running many individual computer chips networked together). "The design of the language itself is essentially complete," says Taft, who presented details of the language on Wednesday at the O'Reilly Open Source Convention. "The first version of the compiler will be released in the next month or so." The language will work on Windows, Mac, and Linux computers.

Microsoft and Intel are putting $20 million into adapting existing languages for multicore processors, so it's difficult to say whether ParaSail will be widely adopted. "There are a lot of people chipping away at the problem, taking existing languages and trying to make them better at handling parallel processing," says Taft.

Taft already has a proven track record in the world of computer language development, says Denis Nicole of the Dependable Systems and Software Engineering Group at Southampton University. But he adds that "it usually takes companies the size of Sun to push new languages on the community." 


Source: http://feeds.technologyreview.com/click.phdo?i=5c82cee4ed37f2aea894934a5748e26e



Gaming 29 - The Post-Pub Podcast

Posted on 17th Jul 2011 at 08:23

Custom PC veteran Phil Hartup and PC Pro's Mike Jennings join Joe and Paul for a late-night, post-pint rant. This episode of the podcast, perhaps because it's sponsored by alcohol, stumbles along with vague coherency through topics such as BioShock Infinite and Just Cause 2.

Mass Effect 2 is obligatorily drawn into the discussion too, as is tradition.

Boozy fumes aren't enough to stop us tackling the thorny issues, however - Phil explains why he expects Battlefield 3 will be a shoddy console port, while Joe shoots down the defence that 64-player multiplayer is something to be proud of.

*hic*


On top of that, Phil brings us a report on how APB: Reloaded is faring after being brought back from the dead, while Joe orates further on his favourite topic of the moment: Frozen Synapse.

As always, we've also got our weekly competition, which this time gives you a chance to win yourself a copy of Assassin's Creed: Brotherhood on the PC and Raving Rabbids on the Nintendo 3DS. You can also find out who won the last competition and bagged themselves a Roccat Vire Gaming Headset.

As ever, the bit-tech hardware podcast features music by Brad Sucks, and was recorded on Shure microphones. You can download the podcast direct, listen in-browser or subscribe through iTunes using the links below. Also, be sure to let us know your thoughts about the discussion in the forums.


Source: http://feedproxy.google.com/~r/bit-tech/blog/~3/-4ayxHYG6jU/




DRAM market remains weak, 2GB DDR3 modules to hit $10

Remember how we said RAM prices probably wouldn't get much cheaper in the near future? Well, that was very wrong. Citing anonymous industry sources, DigiTimes reports today that Kingston has cut deals to sell its 2GB DDR3 modules for as little as $11 (yes, eleven bucks), and that move has prompted competing firms to lower their prices as far as $10. By offering lower prices, DRAM manufacturers expect shipments to rise, but DigiTimes' sources believe overall demand will remain weak.

Based on DRAMeXchange's stats, contract prices have plunged more than 15% during July, and that slide is expected to continue through August. Average contract prices for 2GB DDR3 modules fell 9.4% to $14.50 in the second half of this month, while 4GB modules dipped a similar 9.7% to $28. At the same time, 1Gb and 2Gb chips were priced at $0.75 and $1.59, respectively. Etail prices via Newegg are hovering around $25 to $30 for budget 2x2GB DDR3 1333MHz kits, while individual 2GB modules are between $15 and $20.

Last Tuesday, iSuppli released a report claiming DRAM prices would continue to fall, albeit at a slowing rate. "Following a drop of 14.2% in the first quarter of 2011, the global average decline in pricing for DRAM slowed to 12% in the second quarter. The rate of decrease is expected to decline to 9% in the third quarter and then dwindle to just 4% in the fourth quarter. The rate of decrease will further slow to just 1% in the first quarter of 2012, and then remain in the 3 to 4% range during the rest of 2012."
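Because quarterly declines compound, the full-year picture is steeper than any single quarter suggests. A back-of-the-envelope calculation from iSuppli's 2011 figures above:

```python
# Compound iSuppli's projected quarterly DRAM price declines for
# 2011 (14.2%, 12%, 9%, 4%) to estimate the cumulative drop.
declines = [0.142, 0.12, 0.09, 0.04]  # Q1 through Q4 2011

price = 1.0  # normalized starting price
for d in declines:
    price *= 1 - d  # each quarter's decline applies to the new, lower price

drop = (1 - price) * 100
print(f"Cumulative 2011 decline: {drop:.1f}%")  # prints Cumulative 2011 decline: 34.0%
```

In other words, if iSuppli's projections hold, a module that started 2011 at a normalized price of 1.00 would end the year at roughly 0.66, about a third cheaper.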

In a separate report released last month, the research firm said DDR4 DRAM would arrive in 2014 and rapidly eclipse DDR3 sales by 2015. Although DDR3 will remain relevant for at least a year or two following the launch of DDR4 modules, the newer technology is expected to represent some 56% of the market only one year after it hits shelves. That's a significantly faster adoption rate than witnessed with DDR3, which took two years to achieve 24% of the market and three years before it finally outgrew DDR2.


Source: http://www.techspot.com/news/44846-dram-market-remains-weak-2gb-ddr3-modules-to-hit-10.html
