Wednesday, October 14, 2009

An Interview With Brian Kernighan, Co-Developer of AWK and AMPL

Brian Kernighan—a contributor to the development of the AWK and AMPL programming languages—says that he remains "very interested" in domain-specific languages as well as tools that ease the writing of code. "Programming today depends more and more on combining large building blocks and less on detailed logic of little things, though there's certainly enough of that as well," he notes. "A typical programmer today spends a lot of time just trying to figure out what methods to call from some giant package and probably needs some kind of IDE like Eclipse or Xcode to fill in the gaps. There are more languages in regular use and programs are often distributed combinations of multiple languages. All of these facts complicate life, though it's possible to build quite amazing systems quickly when everything goes right." Kernighan points to the rise of scalable systems, and among the businesses he believes are making significant societal contributions he cites Google, for providing wide-scale access to a vast corpus of information. Kernighan observes that "for better or worse, the driving influence today [behind contemporary computing] seems to be to get something up and running and used via the Internet, as quickly as possible." However, he says that approach "only works because there is infrastructure: open source software like Unix/Linux and GNU tools and Web libraries, dirt-cheap hardware, and essentially free communications."


http://www.computerworld.com.au/article/321082/an_inteview_brian_kernighan_co-developer_awk_ampl

The Web's Inventor Regrets One Small Thing

Governments around the world have put more of their data on the Web this year than in previous years, with the United States and Britain leading the way, said Sir Tim Berners-Lee in an interview at a recent symposium on the future of technology in Washington, D.C. Berners-Lee, currently a professor at the Massachusetts Institute of Technology and director of the World Wide Web Consortium, is enthusiastic about having traffic, local weather, public safety, health, and other data available online in raw form. People will create exciting applications once the data and online tools are available, he said. For example, a simple mash-up that combines roadway maps with bicycle accident reports could help bikers determine the most dangerous roads. "Innovation is serendipity, so you don't know what people will make," he said. "But the openness, transparency, and new uses of the data will make government run better, and that will make business run better as well." As for regrets about the Web, Berners-Lee said that the double slash "//" after the "http:" in Web addresses turned out to be unnecessary.
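
As an aside on that last point, the "//" was meant to introduce the authority (host) part of a Web address, but in hindsight the colon after the scheme could have separated the parts on its own. A minimal illustration using Python's standard urllib.parse (the URLs are placeholders):

    from urllib.parse import urlparse

    # With "//", the parser recognizes example.com as the host (netloc).
    print(urlparse("http://example.com/data"))
    # -> scheme='http', netloc='example.com', path='/data'

    # Without "//", everything after "http:" is treated as a path,
    # which is why Berners-Lee calls the slashes unnecessary in hindsight.
    print(urlparse("http:example.com/data"))
    # -> scheme='http', netloc='', path='example.com/data'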

http://bits.blogs.nytimes.com/2009/10/12/the-webs-inventor-regrets-one-small-thing/

Wednesday, September 2, 2009

Netflix method for security?

Filtering Network Attacks With a 'Netflix' Method
Dark Reading (08/28/09) Higgins, Jackson

University of California, Irvine (UC Irvine) researchers have developed a new method for blacklisting the sources of spam, distributed denial-of-service attacks, worms, and other network attacks. The predictive blacklisting method, which was inspired by Netflix's movie-ratings recommendation system, uses a combination of factors to improve blacklisting, including trends in the times of attacks, geographical locations and IP address blocks, and any connections between the attacker and the victim, such as whether an attacker has previously attacked the victim's network. UC Irvine professor Athina Markopoulou says the predictive blacklisting method "formalizes the blacklisting problem" with regard to predicting the sources of attacks. The researchers found that their method improves predictive blacklisting, accurately predicting up to 70 percent of attacks. "The hit-count of our combined method improves the hit-count of the state of the art for every single day," Markopoulou says. She says the method could be applied to security logs gathered by firewalls, for example, helping an enterprise better defend itself against attacks. The researchers tested their algorithms using hundreds of millions of logs from hundreds of networks, gathered over a one-month period. Markopoulou says the next step is to improve the prediction rate and to understand how attackers could evade the prediction method.
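
The article doesn't spell out the algorithm, but the ingredients it names (time-weighted attack history plus similarity between victim networks, in the spirit of a recommender system) can be sketched. A minimal, hypothetical Python version, assuming logs arrive as (day, victim, attacker) tuples:

    import math
    from collections import defaultdict

    def predict_blacklist(logs, target_victim, today, half_life_days=7.0, top_n=100):
        """Rank likely future attackers of target_victim by blending the
        victim's own time-weighted history with those of similar victims."""
        # Recent attacks count more than old ones (exponential decay).
        history = defaultdict(lambda: defaultdict(float))
        for day, victim, attacker in logs:
            history[victim][attacker] += 2.0 ** (-(today - day) / half_life_days)

        target = history[target_victim]

        def cosine(a, b):
            num = sum(a[k] * b[k] for k in set(a) & set(b))
            den = (math.sqrt(sum(v * v for v in a.values()))
                   * math.sqrt(sum(v * v for v in b.values())))
            return num / den if den else 0.0

        # Collaborative-filtering step: victims attacked by similar sources
        # contribute their histories, weighted by similarity.
        scores = defaultdict(float)
        for attacker, weight in target.items():
            scores[attacker] += weight
        for victim, hist in history.items():
            if victim == target_victim:
                continue
            sim = cosine(target, hist)
            for attacker, weight in hist.items():
                scores[attacker] += sim * weight

        return sorted(scores, key=scores.get, reverse=True)[:top_n]

The decay term stands in for the "trends in the times of attacks" factor, and the similarity-weighted neighbor term is the analogue of Netflix recommending items liked by similar users; the published method also folds in geography and IP-block structure, which this sketch omits.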

Full Article @
http://www.darkreading.com/security/perimeter/showArticle.jhtml?articleID=219500483&cid=nl_DR_DAILY_H

Internet Searches Much Faster With New System

Faster Searches Key to a Greener Web
University of Glasgow (United Kingdom) (08/31/09) Forsyth, Stuart

University of Glasgow researchers created a system using field programmable gate arrays (FPGAs) to search a document index 20 times faster than a system based on standard processors. The researchers plan to develop the system for use in Web servers to speed up Internet searches, which they say would reduce the Internet's energy consumption and carbon cost. Estimates for the amount of carbon dioxide generated by a single Internet search request range from 0.2g per search, according to Google, to 7g per search, according to Harvard University physicist Alex Wissner-Gross. "Few people stop to think about the carbon costs of their computing," says project researcher Wim Vanderbauwhede. "By making Internet searches faster, servers will use less energy to produce results, even if the power consumption of the actual equipment is the same, because they will use that energy for a fraction of the time." The researchers found that a system of two Xilinx FPGAs running information retrieval and filtering algorithms over a document database was 20 times faster in returning results than a dual-core Intel Itanium-2 processor, and that the FPGAs consumed only 1.25 watts each, compared to the 130 watts consumed by the Itanium processor. The researchers plan to improve the performance of the current prototype and test it in a data center environment.
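
To put the quoted figures together: energy is power times time, so the savings compound. A back-of-the-envelope check, assuming (as the article implies) that the two-FPGA system finishes the same workload in one-twentieth of the time:

    # Figures quoted above: two FPGAs at 1.25 W each vs. a 130 W Itanium-2,
    # with the FPGA system returning results 20 times faster.
    cpu_power_w = 130.0
    fpga_power_w = 2 * 1.25
    cpu_time = 1.0                 # one workload, arbitrary time unit
    fpga_time = cpu_time / 20.0

    cpu_energy = cpu_power_w * cpu_time      # 130.0 units
    fpga_energy = fpga_power_w * fpga_time   # 0.125 units
    print(cpu_energy / fpga_energy)          # ~1040x less energy per workload

So the claimed green benefit comes from both the lower power draw and the shorter runtime, not from either alone.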

Full Article @
http://www.gla.ac.uk/news/headline_128603_en.html

SDSC Dashes Forward with New Flash Memory Computer System

The University of California, San Diego's (UCSD's) San Diego Supercomputer Center (SDSC) recently unveiled Dash, a flash memory-based supercomputer designed to accelerate research on a variety of data-intensive science problems. Dash is part of the Triton Resource, an integrated data-intensive resource launched earlier this summer for use by the University of California system. Dash, which has a peak speed of 5.2 teraflops, is the first high-performance computing system to use flash memory technology. "Dash's use of flash memory for fast file-access and swap space--as opposed to spinning disks that have much slower latency or I/O times--along with vSMP capabilities for large shared memory will facilitate scientific research," says SDSC's Michael Norman. "Today's high-performance instruments, simulations, and sensor networks are creating a deluge of data that presents formidable challenges to store and analyze; challenges that Dash helps to overcome." SDSC's Allan Snavely says Dash can perform random data access one order of magnitude faster than other machines, allowing it to solve data-mining problems that look for the "needle in the haystack" up to 10 times faster than even larger supercomputers that use spinning disk technology.
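
The random-access advantage Snavely describes can be sanity-checked on any machine with both a flash drive and a spinning disk. A minimal microbenchmark sketch in Python (the file paths are placeholders, and a real measurement would also need to defeat the operating system's page cache):

    import os, random, time

    def random_read_latency(path, reads=1000, block=4096):
        """Average latency of random block reads; flash typically comes in
        around 0.1 ms here versus several ms for a spinning-disk seek."""
        size = os.path.getsize(path)
        fd = os.open(path, os.O_RDONLY)
        start = time.perf_counter()
        for _ in range(reads):
            os.lseek(fd, random.randrange(0, max(1, size - block)), os.SEEK_SET)
            os.read(fd, block)
        os.close(fd)
        return (time.perf_counter() - start) / reads

    # e.g. compare random_read_latency("/flash/bigfile")
    #      against random_read_latency("/disk/bigfile")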

Full Article @
http://ucsdnews.ucsd.edu/newsrel/supercomputer/09-09HPC.asp