Friday, December 28, 2007

Google Charts API

First, I apologize for my absence over the past period; I have been busy with work that is now nearly finished.
Google keeps treating us to new and exciting things in the world of information technology. It recently launched an API that makes embedding charts in web pages so easy that it is now no harder than adding an image tag.
Yes, with a URL like the following:

http://chart.apis.google.com/chart?
chs=200x125
&chd=s:helloWorld
&cht=lc
&chxt=x,y
&chxl=0:|Mar|Apr|May|June|July|1:||50+Kb

Adding a chart is now as easy as writing an image tag in HTML:

<img src="http://chart.apis.google.com/chart?chs=200x125&amp;chd=s:helloWorld&amp;cht=lc&amp;chxt=x,y&amp;chxl=0:|Mar|Apr|May|June|July|1:||50+Kb"
alt="Sample chart" />

The result looks like this:



[Image: yellow line chart, x-axis labeled March through July, y-axis labeled 50 Kb]

Learn more about this API at:
http://code.google.com/apis/chart
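For readers who want to generate such URLs programmatically, here is a minimal Python sketch. The parameter names (chs, chd, cht, chxt, chxl) are the ones from the example above; the chart_url helper itself is my own illustration, not part of the API.

```python
# A minimal sketch of assembling a Google Chart API URL like the one in
# the post. The helper function is illustrative, not part of the API.
def chart_url(params):
    """Join chart parameters into a single Chart API request URL."""
    query = "&".join(f"{key}={value}" for key, value in params.items())
    return "http://chart.apis.google.com/chart?" + query

url = chart_url({
    "chs": "200x125",                             # chart size in pixels
    "chd": "s:helloWorld",                        # data in simple encoding
    "cht": "lc",                                  # "lc" = line chart
    "chxt": "x,y",                                # which axes are visible
    "chxl": "0:|Mar|Apr|May|June|July|1:||50+Kb", # axis labels
})
print(url)
```

Pointing an image tag's src at the printed URL reproduces the chart above.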

Friday, July 20, 2007

Web 3.0


Just in case you missed it, the web now has version numbers. Nearly three years ago, amid continued hand-wringing over the dot-com crash, a man named Dale Dougherty dreamed up something called Web 2.0, and the idea soon took on a life of its own. In the beginning, it was little more than a rallying cry, a belief that the Internet would rise again. But as Dougherty's O'Reilly Media put together the first Web 2.0 Conference in late 2005, the term seemed to trumpet a particular kind of online revolution, a World Wide Web of the people.
Web 2.0 came to describe almost any site, service, or technology that promoted sharing and collaboration right down to the Net's grass roots. That includes blogs and wikis, tags and RSS feeds, del.icio.us and Flickr, MySpace and YouTube. Because the concept blankets so many disparate ideas, some have questioned how meaningful—and how useful—it really is, but there's little doubt it owns a spot in our collective consciousness. Whether or not it makes sense, we now break the history of the Web into two distinct stages: Today we have Web 2.0, and before that there was Web 1.0.
Which raises the question: What will Web 3.0 look like?
Yes, it's too early to say for sure. In many ways, even Web 2.0 is a work in progress. But it goes without saying that new Net technologies are always under development—inside universities, think tanks, and big corporations, as much as Silicon Valley start-ups—and blogs are already abuzz with talk of the Web's next generation.
To many, Web 3.0 is something called the Semantic Web, a term coined by Tim Berners-Lee, the man who invented the (first) World Wide Web. In essence, the Semantic Web is a place where machines can read Web pages much as we humans read them, a place where search engines and software agents can better troll the Net and find what we're looking for. "It's a set of standards that turns the Web into one big database," says Nova Spivack, CEO of Radar Networks, one of the leading voices of this new-age Internet.
But some are skeptical about whether the Semantic Web—or at least, Berners-Lee's view of it—will actually take hold. They point to other technologies capable of reinventing the online world as we know it, from 3D virtual worlds to Web-connected bathroom mirrors. Web 3.0 could mean many things, and for Netheads, every single one is a breathtaking proposition.
Tim, Lucy, and The Semantic Web

The Semantic Web isn't a new idea. This notion of a Web where machines can better read, understand, and process all that data floating through cyberspace—a concept many refer to as Web 3.0—first entered the public consciousness in 2001, when a story appeared in Scientific American. Coauthored by Berners-Lee, the article describes a world in which software "agents" perform Web-based tasks we often struggle to complete on our own.
The article begins with an imaginary girl named Lucy, whose mother has just been told by her doctor that she needs to see a specialist. "At the doctor's office, Lucy instructed her Semantic Web agent through her handheld Web browser," we read. "The agent promptly retrieved information about Mom's prescribed treatment from the doctor's agent, looked up several lists of providers, and checked for the ones in-plan for Mom's insurance within a 20-mile radius of her home and with a rating of excellent on trusted rating services."
That's quite a mouthful, but it only begins to describe Berners-Lee's vision of a future Web. Lucy's Semantic Web agent can also check potential appointment times against her mother's busy schedule, reschedule other appointments if need be, and more—all on its own, without help from Lucy. And Lucy is just one example. A Semantic Web agent could be programmed to do almost anything, from automatically booking your next vacation to researching a term paper.
How will this actually work? In Berners-Lee's view, it involves a reannotation of the Web, adding all sorts of machine-readable metadata to the human-readable Web pages we use today (see "Questions of Semantics," below). Six years after the Scientific American article, official standards describing this metadata are in place—including the Resource Description Framework (RDF) and the Web Ontology Language (OWL)—and they're already trickling into real-world sites, services, and other tools. Semantic Web metadata underpins Yahoo!'s new food site. Spivack's Radar Networks is building a kind of Semantic Web portal. A development platform, Jena, is in the works at HP. And you'll find Semantic Web structures in Oracle's Spatial database tool.
The problem is that a complete reannotation of the Web is a massive undertaking. "The Semantic Web is a good-news, bad-news thing," says R. David Lankes, an associate professor at Syracuse University's School of Information Studies. "You get the ability to do all these very complex queries, but it takes a tremendous amount of time and metadata to make that happen."
The Other Semantic Web

As a consequence, many researchers take a very different approach to the Semantic Web. Rather than calling for an overhaul of Web formats, which would involve hundreds of thousands of independent sites, they're building agents that can better understand Web pages as they exist today. They're not making the pages easier to read, they're making the software agents smarter.
One early example is the BlueOrganizer from AdaptiveBlue (http://www.adaptiveblue.com/). In certain situations, when you visit a Web page, this browser plug-in can understand what the page is about, automatically retrieving related information from other sites and services. If you visit a movie blog, for instance, and read about a particular film, it immediately links to sites where you can buy or rent that film. "It's what you might call a top-down approach," says Alex Iskold, the company's CEO. "Web pages already contain semantic data. We can understand them, so why shouldn't computers? Why not build a technology that can parse and process existing services and databases?"
Of course, that's easier said than done. Countless companies offer tools similar to BlueOrganizer—including Claria's PersonalWeb—but these aren't that different from the old Amazon.com "recommendation engine," which suggests new products based on your surfing and buying habits. We're a long way from agents that can think on their own. In the near term, the Semantic Web may require the sort of metadata Berners-Lee proposes. "Automated agents are worth striving for," says Pattie Maes, an MIT Media Lab veteran who founded the Lab's Software Agents Group. "But it's hard to say what's better—tags built into Web pages or tags that are, in a sense, inferred by machines."
Semantics and Search

The Semantic Web, like Web 2.0, is a nebulous concept. "Considering that the very word semantic is all about meaning, it's ironic that the term Semantic Web is so ill defined," says Radar Networks' Spivack. Some, like Spivack, fall into the Berners-Lee camp. Others, like AdaptiveBlue's Iskold, believe in the artificial-intelligence method. And then there are the others: the semantic searchers.
Rather than providing automatic information retrieval, semantic search engines seek to improve on the Google-like search model we've grown so accustomed to. The idea is to move beyond mere keyword searches to a better understanding of natural-language queries. "Right now, search engines can't tell the difference between Paris Hilton and the Hilton in Paris," says Jeff Bates, cofounder of Slashdot, one of the driving forces behind Web 2.0. "There's millions of dollars being spent trying to better optimize search, and that's a big part of what the Semantic Web will be."
This kind of natural-language processing has been in development for years, but it, too, has found its way onto the public Web. Several start-ups, including Powerset and TextDigger, are hard at work on semantic search engines based on the open-source academic project WordNet. It should be noted, however, that natural-language search could very well play a role in the Berners-Lee Semantic Web. His is merely a framework to enable all sorts of apps, and semantic search might be one of them.
A Web Beyond Words

Though Web 3.0 is most often associated with the Semantic Web, the two are far from synonymous. Countless other concepts are poised to play a role in our online future, and many go beyond semantics, using space, images, and sound.
One possibility is the so-called 3D Web, a Web you can walk through. Many see this as an extension of the "virtual worlds" popping up on today's Internet. In the future, they say, the Web will be one big alternate universe reminiscent of Second Life and There.com. But others scoff at this notion, claiming it's just a less-efficient version of today's Internet. They see the 3D Web not as an alternate universe but as a re-creation of our existing world. On the 3D Web, you could take a virtual stroll through an unfamiliar neighborhood shopping for houses or visit famous sites you've never seen. Google Earth already offers an experience not far removed from this. "Today, with a service like Google Earth, you can zoom in on Seattle and see how tall the buildings are," says Syracuse University's Lankes. "It really isn't that much of a leap to actually put you, or your avatar, in Seattle and let you walk around."
The trouble is, 3D only goes so far. It doesn't enhance the very 2D world of words, pictures, and video. For many, the more interesting idea is a mediacentric Web, offering not just language-based search but pure media search. Today we depend on keywords even when searching for images, videos, and songs—a woefully inadequate system. Companies like Ojos and Polar Rose are working to reinvent media search, hinting at a world where we search for media with other media—not just keywords (see "Look, Ma, No Keywords!" below).
Then there's the Pervasive Web, a Web that's everywhere. Today's Web already extends beyond the desktop, to cell phones and handhelds, but it might extend even further—into our everyday surroundings. At the MIT Media Lab, Maes is toying with the idea of Web-connected bathroom mirrors. As you brush your teeth in the morning, there's the latest news. Meanwhile, with his blog, the End of Cyberspace, Alex Soojung-Kim Pang of the Institute for the Future envisions the Web automating much of what goes on in the home. Your windows, for instance, could automatically open when the weather changes. With help from mesh networks—wireless networks consisting of tiny nodes that can route data to and from almost anywhere—the possibilities are nearly endless.
Tomorrow's Web, Today

In some respects, Web 3.0 is nothing more than a parlor game. Ideas tossed out here and there. But at the very least, these ideas have roots in current trends. Many companies, from HP and Yahoo! to Radar Networks, are adopting official Semantic Web standards. Polar Rose and Ojos are improving image search. Google and Microsoft are moving toward 3D. No one can predict what Web 3.0 will look like. But one thing's for sure: It'll happen.
An Idiot's Guide to Web 3.0

What will Web 3.0 look like? Who knows? But here are a few possibilities.
The Semantic Web: A Web where machines can read sites as easily as humans read them (almost). You ask your machine to check your schedule against the schedules of all the dentists and doctors within a 10-mile radius—and it obeys.
The 3D Web: A Web you can walk through. Without leaving your desk, you can go house hunting across town or take a tour of Europe. Or you can walk through a Second Life–style virtual world, surfing for data and interacting with others in 3D.
The Media-Centric Web: A Web where you can find media using other media—not just keywords. You supply, say, a photo of your favorite painting and your search engines turn up hundreds of similar paintings.
The Pervasive Web: A Web that's everywhere. On your PC. On your cell phone. On your clothes and jewelry. Spread throughout your home and office. Even your bedroom windows are online, checking the weather, so they know when to open and close.
Questions of Semantics
Tim Berners-Lee isn't the only man behind the Semantic Web. His 2001 Scientific American article, which introduced the concept to the world, was actually written in collaboration with two other eminent researchers, Ora Lassila and Jim Hendler. Six years on, we tracked down Professor Hendler, now director of the Joint Institute for Knowledge Discovery at the University of Maryland and still one of the driving forces behind this next-generation Internet.
Q: Does the Semantic Web idea predate your now-famous Scientific American article—or was that the first mention?
A: That's the first time the term was coined and printed in a fairly accessible place. Recently, we've been looking for the absolute earliest use of the term Semantic Web, and it seems to go a bit further back, to a few small things Tim had written. He and some colleagues were using it locally within MIT and the surrounding community in the late nineties.
Q: The Semantic Web can be a difficult concept to grasp. How do you define it?
A: What the traditional Web does for the text documents in our lives, the Semantic Web does for all our data and information. Today, on my Web page, I can build a pointer to another Web page. But I can't link data together in the way I can link pages together. I can't point from a value in one database to some other value in some other database. To use a simple example, if your driver's license number is in one place and your vehicle identification number is in another, there should be a way of linking those two things together. There should be a way for machines to understand that those two things are related.
Q: Why is this so necessary?
A: Right now, it's very difficult to browse data on the Web. I can use a search engine that gives me the results of a query and draws them as a list, but I can't click on one of those values and see what it really means and what it's really related to. Today's social networking is trying to improve this, with things like tagging. But if you typed "polish" and I typed "polish," how do we know we're talking about the same thing? You might be talking about a language and I might be talking about something that goes on furniture. On the other hand, if those two names are precisely identified, they don't accidentally overlap and it's easier to understand the data we've published. So the technology of the Semantic Web is, in a sense, the technology of precise vocabularies.
Q: And this, in turn, would allow a machine to go out across the Web and find the things we're looking for?
A: Yes. It's very hard for this to happen with just language descriptions. Our idea is to have machine-readable information shadowing the human-readable stuff. So if I have a page that says, "My name is Jim Hendler. Here's a picture of my daughter," the machine realizes that I'm a person, that I have a first name and a last name, that I'm the father of another person, and that she's a female person. The level of information a machine needs would vary from application to application, but just a little of this could go a long way—as long as it can all be linked together. And the linking is the Web part of the Semantic Web. This is all about adding meaning to the stuff we put on the Web—and then linking that meaning together.
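Hendler's sentence about Jim and his daughter can be sketched as data. The following is a hedged illustration of machine-readable triples "shadowing" a human-readable page; the property names (firstName, fatherOf, and so on) are invented for this example and are not taken from RDF, OWL, or any official vocabulary.

```python
# Illustrative subject-predicate-object triples shadowing the sentence
# "My name is Jim Hendler. Here's a picture of my daughter."
# Property names are made up for this sketch.
triples = [
    ("jim", "type", "Person"),
    ("jim", "firstName", "Jim"),
    ("jim", "lastName", "Hendler"),
    ("jim", "fatherOf", "daughter"),
    ("daughter", "type", "Person"),
    ("daughter", "gender", "female"),
]

def query(subject=None, predicate=None):
    """Return every object whose triple matches the given pattern."""
    return [o for s, p, o in triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)]

# A machine can now answer questions the raw sentence cannot:
print(query(subject="jim", predicate="fatherOf"))  # ['daughter']
```

The "Web part" Hendler describes comes from using globally unique identifiers (URIs) in place of the short names above, so triples on different sites can link to one another.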
Look, Ma, No Keywords!

Three new Web services reinvent the way we look for music and images.
You won't search for media with keywords in the future—you'll search for media with media. To find an image, you'll supply another image. To find a song, you'll supply another song. Don't believe it? Three new services—image-crunchers Like.com and Polar Rose, and music-matchmaker Pandora—have already taken the first steps toward this new breed of media search.
Today, when you search the Web for music and images, you're merely searching for the words that surround them. When you visit Google Image Search and type in "Steve Jobs," you aren't really looking for photos of Apple's CEO. You're looking for filenames and captions that carry those keywords—"Steve" and "Jobs"—hoping the right photos are somewhere nearby.
There's a sizable difference between the two. On any given image search, Google turns up countless photos completely unrelated to your query, even as it misses out on countless others that may be a perfect match. In the end, you're relying on Web publishers to annotate their images accurately, and that's a hit-or-miss proposition.
The situation is much the same with MP3s, podcasts, and other sound files. When trolling Web-based music services, you can run a search on "Elvis" or "Jailhouse Rock." But what if you're looking for music that sounds like Elvis? Wouldn't it be nice if you could use one song to find other similar songs?
Ojos and Polar Rose are tackling the image side of the problem. Last spring, Ojos unveiled a Web-based photo-sharing tool called Riya, which automatically tags your pictures using face recognition. Rather than manually adding "Mom" tags to all your photos of Mom, you can show Riya what she looks like, and it adds the tags for you. The service is surprisingly accurate, gaining a huge following from the moment it hit the Web, but Ojos quickly realized that the Riya face-rec engine—which also identifies objects and words—could be used for Web-wide image search.
That's a mammoth undertaking, but, with an alpha service called Like.com, the company is already offering a simple prototype. Today, Like.com is little more than a shopping engine. You select a photo of a product that best represents what you're looking for, and the service shows all sorts of similar products. But it's an excellent proof-of-concept.
Meanwhile, Polar Rose (http://www.polarrose.com/) recently introduced a browser plug-in that does face recognition with any photo posted to any Web site. For the moment, it's just a means of tagging images automatically—much like Riya. But unlike Riya, it already works across the length and breadth of the Net.
The closest equivalent when it comes to audio is Pandora, from a group of "musicians and music-loving technologists" called the Music Genome Project. Since its inception in 2000, the group has analyzed songs from over 10,000 artists, carefully notating the musical makeup of each track. Using this data and a list of your favorite artists, Pandora can instantly construct a new collection of songs that suit your tastes. Again, this is hardly a Web-wide search engine, and unlike the image services from Ojos and Polar Rose, it relies heavily on up-front human input. But it's a step in the right direction. True media search is closer than you think.
Versions 4, 5, 6...

Is it too early to talk about Web 4.0? Of course not.
According to Danish editor Jens Roland, who's been tracking the increasingly common practice of assigning version numbers to the World Wide Web, at least one Internet pundit is already discussing Web 38.0. Roland hastens to point out that this discussion is most likely tongue-in-cheek. But even as Web 2.0 continues to mature and an assortment of ideas called Web 3.0 hits our collective consciousness, some people are actually giving serious thought to version 4.0. Go ahead. Google it.
One of the first and most visible Web 4.0 pundits is Seth Godin, a technology-minded marketing guru with seven books to his name, including Unleashing the Ideavirus, billed as the most popular e-book ever. What does a marketing guru have to do with the future of the Net? Everything. After all, these Web-wide version numbers have so much to do with spin.
Godin envisions Web 4.0, or Web4, as a place where you have even tighter online connections to your friends, family, and colleagues. "There are so many things the Web can do for me if it knows who my friends are, where they are, what they're doing, what they're interested in, how they can help me—and vice versa," he says.
On his future Web, if you start typing an e-mail proposing a particular business deal with Apple, a window pops up, telling you that one of your colleagues is already in talks with Apple. If you miss an airplane flight and book a new one with your cell phone, it automatically sends messages to the friends you're meeting for dinner, letting them know you'll be late. It sounds a lot like the Semantic Web—with less privacy. Will this actually happen? Will people relinquish that much information about their private lives? Who knows? It's just an idea. Of course, people like Seth Godin know a thing or two about spreading ideas.

This article has been copied exactly from
PC MAGAZINE. "Web 3.0." March 3, 2007.
URL: http://www.pcmag.com/article2/0,1895,2102852,00.asp

I wish I could have added some value to this article, but I am too busy at the moment, so at least you can benefit from reading it as it is.


Monday, June 11, 2007

Google Street View and What's Behind the Curtain


Google recently launched a new technological rocket called Google Street View. The service, still limited to a few areas of the United States, has impressed the many people whom Google keeps astonishing with its wonders (and I stand among them, astonished as well). The project aims to give a living picture of areas of the world, making you feel as if you were strolling through another country yourself, especially with the 360-degree rotation made possible by the camera Google used to capture the images. It is worth mentioning that, according to reports, the project has so far cost Google 26 billion US dollars. As for what's behind the curtain, it is almost certainly a camera similar to the Dodeca2360 produced by Immersive Media. This camera can deliver not only rotatable images (letting you spin a full 360 degrees horizontally around the center point) but rotatable video as well: you can play a video of a car driving through the streets (captured, of course, with the Dodeca2360) and look all around you as if you were riding in that car yourself. For anyone who could not follow that description, this link should make it clear:
In the film at that link, simply click and drag with the mouse to change the camera's direction and view the video from another angle (around the center). The camera can move horizontally with complete freedom, a full 360-degree rotation, while vertically it covers 290 degrees because of the mount that fixes it to the car's roof. It also delivers 100 million pixels per second, meaning it produces video even sharper than HDTV. It is designed to work in harsh conditions and can even operate underwater.
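A quick sanity check on that HDTV comparison: 100 million pixels per second does indeed exceed what 1080p HDTV moves. The 30 frames-per-second figure below is my own assumption for illustration, not a number from the post.

```python
# Back-of-the-envelope check of the claim that 100 million pixels per
# second beats HDTV. Assumes 1080p video at 30 frames per second.
hdtv_throughput = 1920 * 1080 * 30   # pixels per second for 1080p30
camera_throughput = 100_000_000      # figure quoted in the post

print(hdtv_throughput)                       # 62208000
print(camera_throughput > hdtv_throughput)   # True
```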
This camera is part of a complete package offered by Immersive Media, consisting of:
1- The camera
2- The Base Unit (which merges the synchronized images and compresses them in real time)
3- Software that helps publish the produced films
Immersive Media has also enabled this technology to integrate with geographic information systems such as Google Maps.


Sources:
1- The camera behind Google's Street View
2- Immersive Media


Saturday, June 09, 2007

Just as We Are Abandoning Network Cables, We Will Soon Stop Using Power Cables Too



Yesterday, the English newspaper the Daily Mail revealed that a team of researchers at MIT has succeeded in transmitting electricity wirelessly over a distance of 7 feet, in a demonstration during which they powered a 60-watt light bulb. This breakthrough comes at a significant moment, as the world is undergoing an important shift in computer networking, moving steadily toward a wireless world. The researchers affirmed that their achievement will one day free us from batteries in portable devices. So has the Centrino era come to an end?!




The scientists have named this technology "WiTricity." It is based fundamentally on electromagnetic induction, the same phenomenon used in electric motors and transformers. The achievement lies in the distance: in transformers and motors, induction works only over very short ranges. This raises another issue that deserves to be questioned whenever we discuss wireless technologies: is this technology safe for our health, or could it cause health problems for humans or even animals? These questions have been raised and have stirred considerable controversy, but so far no danger to living organisms from the electromagnetic and radio waves used in these technologies has been proven.


Sources:
1- The Daily Mail
2- BBC News

Sunday, June 03, 2007

When the Desktop Becomes Reality, Not Just a Name

Microsoft's latest craze: a fully interactive desktop. Will Microsoft make the leap from what we imagine as science fiction to what is real?


Check out the details of Microsoft Surface at the following site:

http://www.microsoft.com/surface/



Stroll Through the Streets of America with Google Street View

It seems there are no limits to Google's plans... today we tour at street level, and tomorrow perhaps we will dive into the sea with Google.