Semantic Web & Its Applications

Abstract:
The World Wide Web is a universal information space. As a medium for human exchange it is becoming mature, but we are just beginning to build a space where automated agents can contribute: the Semantic Web, where machines exchange information intelligently. The Semantic Web makes it possible to express abstract concepts, structures and relations (until now understandable only to humans) in machine-understandable form. The key lies in knowledge representation.
Knowledge representation is a field with a reputation for being initially interesting but failing to shake the world to the extent that some of its proponents hoped. It made sense and was useful on a small scale, but it never made it to the large scale. This is exactly the state the hypertext field was in before the Web. Each field had made certain centralist assumptions, if not in the philosophy then in the implementations, which prevented it from spreading globally. But each field was based on fundamentally sound ideas about the representation of knowledge.
The Semantic Web is what we will get if we perform the same globalization process on knowledge representation that the Web performed on hypertext: we remove the centralized concepts of absolute truth, total knowledge, and total provability, and see what we can do with limited knowledge. "Semantic" comes from the Greek words for sign, signify, and significant, and today means of or relating to meaning, often in language. The Semantic Web is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. It is the idea of having data on the Web defined and linked in a way that lets it be used for more effective discovery, automation, integration, and reuse across various applications. The Web can reach its full potential only if it becomes a place where data can be shared and processed by automated tools as well as by people.
Introduction:
Now that the Web has reached critical mass as a medium for human communication, the next phase is to build the "Semantic Web": a Web that includes documents, or portions of documents, describing explicit relationships between things and containing semantic information intended for automated processing by machines.
The current Web:
The current Web is aimed entirely at human readers; it is display-oriented. Machines are oblivious to the actual information content: web browsers, web servers and even search engines do not really distinguish weather forecasts from scientific papers or phone numbers from zip codes. This inability of machines to process the content of information seriously hampers the functionality of the current Web. Computers are limited to transmitting and presenting information on the Web; they cannot really help us process this information in any intelligent way. Under the Semantic Web, we can customize a data structure and represent it in a manner understandable by all kinds of interpreters, be they humans or machines. The more interesting prospect is the automation that the semantic nature of the Web makes possible.
Many information-based tasks require a person to assimilate and manipulate multiple pieces of information [5]: a military commander monitoring troop movement, logistics support and weather feedback from a battlefront to decide on the next course of action, or a doctor viewing a patient's past visit, prescription and x-ray history with respect to a diagnosis in order to determine how to proceed with a particular therapy. We often tackle these tasks using applications. Each application offers users the opportunity to work with a certain pool of information by giving them a set of information views and a collection of operations that can be invoked on the information being viewed. But when users' information management tasks do not exactly match the ones envisioned by the application developers, they can find themselves fighting the application, struggling to see all the information they need at once, or to invoke operations that are buried deep in the application's feature set. Worse, users often find that the information they need is spread out over multiple applications. In such cases, they wade through cluttered desktops full of application windows, mentally or manually processing information from each one while ignoring plenty of other, distracting information.
Today, the World Wide Web is an indispensable resource, and the Semantic Web, with its metadata-annotated information, will be even more vital for completing information-based tasks. On the Semantic Web, agents and other automated processes will produce more information faster and at a finer level of access and semantic granularity, shared via web services. Attempting to manage this torrent of information with multiple applications will lead to a proliferation of applications, further partitioning related information and thereby exacerbating the already complex, time-consuming and error-prone nature of many information-based tasks. A more robust solution is needed: one that can easily adapt to evolving user and task needs by working equally well with multiple, unanticipated types of information fragments as they become available on the Semantic Web.
The Semantic Web hints at a solution to some of these problems. It offers a single unified data model powerful enough to hold all of the information currently scattered among multiple applications, and its metadata annotations can be used to select relevant information. But merely unifying the data is insufficient. To solve a particular task, users still need tools that aggregate the information they need into a meaningful presentation that lets them view and manipulate it as their task requires. What is needed are small, flexible and reusable units of content, with associated user interfaces and application logic, that can be arbitrarily combined to yield larger, more powerful task interfaces. In this paper, we argue that it is both desirable and possible to let end users create their own information management applications over the Semantic Web, choosing precisely which information objects they want to work with and how they want to view and manipulate those objects. Such "end-user application development" would let end users create workspaces designed specifically to accomplish their particular information management tasks. Our approach combines three elements:
1. A task workspace designer that lets users specify the information objects they want to lay out and work with in their application, and the views and operations that should be applicable to those objects. We consider the problem of information presentation as consisting of two parts: specifying the high-level layout of multiple types of information, and specifying the view of a particular information entity. (For example, a person in the accounting department may be interested in the spreadsheet view of some sales figures, whereas a higher-level executive would prefer a chart based on the same data.) Layout capability is important not only for managing a set of related content within an application, but also for a priori unrelated content that the user has aggregated and juxtaposed for a task. A view should be the unit of user interface interaction that exposes relevant properties and direct interaction capabilities of an object, supporting constraints imposed by the task or by user preferences.
2. A view designer, a supporting tool that lets users specify how each of the information objects in their workspace should be shown: what properties of those objects they want to see, and how they should be laid out; and
3. A channel manager, a supporting tool that lets users specify content queries that dynamically maintain collections of information relevant to the task. On the Semantic Web, this will be important because the authoring ontology is not intended to match all usage scenarios, so users must be able to extract the relevant portions of the information easily and efficiently.

Rather than specifying views, workspaces, and channels programmatically, users put them together using natural visual operations such as dragging, dropping, and resizing that they are already familiar with from managing their desktop environments. The workspaces, views, and channels designed by these end users are themselves represented in RDF on the Semantic Web, creating an opportunity for users to share the views and workspaces they design with others, and for unsophisticated users to craft their "applications" by tweaking preexisting ones instead of creating them from scratch. We have implemented our system as part of the Haystack information management platform. This tool provides a set of cooperating technologies and tools supporting end-user creation, visualization and manipulation of Semantic Web content, as well as application development for the Semantic Web. Along with a blackboard-style RDF store, it hosts agents that provide automated reasoning, and it supports a user interface framework offering pervasive context menus, drag-and-drop capability, and a view architecture that selects appropriate views for entities of different types depending on the context. [5]

Why the Semantic Web?
On the Web today are billions of documents. As their number increases, so does the need for more sophisticated retrieval systems. The goal is to avoid being swamped by useless information and losing data in the dark reaches of cyberspace: a classic problem for anyone who has to organize a large database meaningfully. The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation. It is about bringing those data-handling possibilities to the level of the Web. Leaving aside the artificial-intelligence problem of training machines to behave like people, the Semantic Web approach instead develops languages for expressing information in a machine-processable form.
For enabling a Semantic Web, the model used is of primary importance. This model has a layered structure, as shown in the figure below,

[Figure: the layered model of the Semantic Web]

where each application can build its semantics on top of the existing layers. [3]

The Schema layer:
The basic model of RDF allows us to do a lot on the blackboard, but it does not give us many tools. It gives us a model of assertions and quotations onto which we can map data in any new format. We next need a schema layer to declare the existence of a new property and to describe it. We want to be able to constrain the way a property is used; typically, we want to constrain the types of object it can apply to.
The logical layer:
The next layer, then, is the logical layer. We need ways of writing logic into documents to allow such things as rules; the deduction of one type of document from a document of another type; and the checking of a document against a set of rules of self-consistency.

The Resource Description Framework:
The Resource Description Framework (RDF), developed under the auspices of the World Wide Web Consortium (W3C), is a language designed to support the Semantic Web, in much the same way that HTML is the language that helped initiate the original Web. RDF is a framework for supporting resource description, or metadata (data about data), for the Web. RDF provides common structures that can be used for interoperable XML data exchange.
RDF provides a model for describing resources. It is defined formally as follows:

There is a set called Resources. There is a set called Literals. There is a subset of Resources called Properties. There is a set called Statements, each element of which is a triple of the form {pred, sub, obj}, where pred is a property (member of Properties), sub is a resource (member of Resources), and obj is either a resource or a literal (member of Literals).

Resources have properties (attributes or characteristics). RDF defines a resource as any object that is uniquely identifiable by a Uniform Resource Identifier (URI). The properties associated with resources are identified by property-types, and property-types have corresponding values. Property-types express the relationships of values associated with resources. In RDF, values may be atomic in nature (text strings, numbers, etc.) or other resources, which in turn may have their own properties. A collection of these properties that refers to the same resource is called a description. At the core of RDF is a syntax-independent model for representing resources and their corresponding descriptions. Any resource can be described with RDF as long as the resource is identifiable with a URI.
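As a minimal sketch of this model in code (using the open-source rdflib Python library; the example URIs are invented for illustration), a description is simply the set of statements that share one subject resource:

from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/")
g = Graph()  # a set of Statements

# Each add() asserts one {pred, sub, obj} triple about a resource.
g.add((EX.VDSouza, EX.name, Literal("Vijay Mukhi")))             # obj is a literal
g.add((EX.VDSouza, EX.organization, EX.ComputerSocietyOfIndia))  # obj is a resource

# The statements about EX.VDSouza, taken together, form its description.
for sub, pred, obj in g:
    print(sub, pred, obj)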

The RDF Syntax:
RDF defines a simple yet powerful model for describing resources. A syntax representing this model is required to store instances of the model in machine-readable files and to communicate those instances among applications. The RDF specification uses the Extensible Markup Language (XML) encoding as its interchange syntax. RDF also requires the XML namespace facility to precisely associate each property with the schema that defines it.

Schemas and Namespaces:
Meaning in RDF is expressed through reference to a schema. You can think of a schema as a kind of dictionary. RDF schemas are used to declare vocabularies: the sets of semantic property-types defined by a particular community. RDF schemas define the valid properties in a given RDF description, as well as any characteristics or restrictions of the property-type values themselves. The schema is the place where definitions and restrictions of usage for properties are documented. To avoid confusion between independent (and possibly conflicting) definitions of the same term, RDF uses the XML namespace facility. Namespaces are simply a way to tie a specific use of a word in context to the dictionary (schema) where the intended definition is to be found. The table below shows a hypothetical RDF/XML snippet and the generated triples in the data model.

<contact rdf:about="VD'Souza">
<name>Vijay Mukhi</name>
<role>Internet Expert</role>
<organization>Computer Society of India</organization>
</contact>
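The generated triples in the data model:

Subject       Predicate       Object
VD'Souza      name            "Vijay Mukhi"
VD'Souza      role            "Internet Expert"
VD'Souza      organization    "Computer Society of India"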


This is where RDF schemas come in: they define an RDF vocabulary that can be used to express the "Contact" class. This allows all users of a resource of type "Contact" to share an agreed expectation of its properties and of its relationship to other resource types.
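A minimal sketch of such a vocabulary declaration, again with rdflib (the ex: URIs are invented; rdfs:domain and rdfs:range carry the restrictions):

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/schema#")
g = Graph()

# Declare the Contact class and the name property that applies to it.
g.add((EX.Contact, RDF.type, RDFS.Class))
g.add((EX.name, RDF.type, RDF.Property))
g.add((EX.name, RDFS.domain, EX.Contact))   # name may only be used on Contacts
g.add((EX.name, RDFS.range, RDFS.Literal))  # and its value is a literal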
Ontologies:
A key technology of the Semantic Web, one that makes it possible to establish semantic interoperability, is the ontology. An ontology is a formal and declarative representation of some subject area, or an explicit specification of some topic. It is a specification of the concepts to be used for expressing knowledge.

Taxonomy:
A taxonomy defines classes of objects and relations among them. For example, an address may be defined as a type of location, and city codes may be defined to apply only to locations, and so on. Classes, subclasses and relations among entities are a very powerful tool for Web use. We can express a large number of relations among entities by assigning properties to classes and allowing subclasses to inherit those properties. If city codes must be of type city and cities generally have Web sites, we can discuss the Web site associated with a city code even if no database links a city code directly to a Web site.
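The following toy sketch shows this inheritance idea in code (invented URIs; a real system would use an RDFS or OWL reasoner): walking rdfs:subClassOf links lets us find the properties a class inherits from its ancestors.

from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Address, RDFS.subClassOf, EX.Location))
g.add((EX.hasWebSite, RDFS.domain, EX.Location))  # declared on the superclass

def inherited_properties(cls):
    # Properties declared on cls itself, plus those of all its ancestors.
    props = set(g.subjects(RDFS.domain, cls))
    for parent in g.objects(cls, RDFS.subClassOf):
        props |= inherited_properties(parent)
    return props

print(inherited_properties(EX.Address))  # includes hasWebSite, inherited via Location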

Inference rules:
Inference rules in ontologies supply further power. An ontology may express the rule: "If a city code is associated with a state code, and an address uses that city code, then that address has the associated state code." A program could then readily deduce, for instance, that an IIT Bombay address, being in Mumbai, must be in Maharashtra State, which is in India, and should therefore be formatted to Indian standards. The computer does not truly "understand" any of this information, but it can now manipulate the terms much more effectively, in ways that are useful and meaningful to the human user. With ontology pages on the Web, solutions to terminology (and other) problems begin to emerge. Problems arise if Mr. X points to an ontology that defines addresses as containing a zip code and Mr. Y points to one that uses postal codes. This kind of confusion can be resolved if ontologies (or other Web services) provide equivalence relations: one or both of the ontologies may contain the information that Mr. X's zip code is equivalent to Mr. Y's postal code.
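A toy forward-chaining sketch of the city-code rule above, in plain Python over triples (the predicate and place names are invented for illustration):

# Known facts, as (subject, predicate, object) triples.
facts = {
    ("MumbaiCode", "inState", "Maharashtra"),
    ("iitBombayAddress", "hasCityCode", "MumbaiCode"),
}

# Rule: if an address uses a city code, and that city code is associated
# with a state, then the address has the associated state.
def apply_rule(facts):
    derived = set()
    for addr, p1, code in facts:
        if p1 != "hasCityCode":
            continue
        for c, p2, state in facts:
            if p2 == "inState" and c == code:
                derived.add((addr, "hasState", state))
    return derived

print(apply_rule(facts))  # {('iitBombayAddress', 'hasState', 'Maharashtra')}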
SECURITY IN THE SEMANTIC WEB

We first provide an overview of security issues for the semantic web and then discuss some details of XML security, RDF security and secure information integration, which are components of the secure semantic web. As more progress is made on investigating these various issues, we hope that appropriate standards will be developed for securing the semantic web. As stated earlier, logic, proof and trust are at the highest layers of the semantic web: how can we trust the information that the web gives us? Closely related to trust is security. But security cannot be considered in isolation; no single layer should carry all of it. Security cuts across all layers, and this is a challenge: we need security for each of the layers, and we must also ensure secure interoperability between them, as illustrated in Table I.

Layer 5    Logic, Proof, Trust
Layer 4    Secure Ontologies
Layer 3    RDF Security
Layer 2    XML Security (Secure XML Schemas)
Layer 1    Secure TCP/IP, HTTPS, Secure Sockets

TABLE I. SECURITY LAYERS FOR THE SEMANTIC WEB

For example, consider the lowest layer. One needs secure TCP/IP, secure sockets, and secure HTTP, and there are now security protocols for these various lower-layer protocols. One also needs end-to-end security: one cannot just have secure TCP/IP built on untrusted communication layers. That is, we need network security. The next layer is XML and XML schemas. Here one needs secure XML: access must be controlled to various portions of the document for reading, browsing and modification, and there is active research on securing XML and XML schemas. The next step is securing RDF. With RDF, not only do we need secure XML, we also need security for the interpretations and semantics. For example, in one context portions of a document may be Unclassified while in another context the document may be Classified; as an example, one could declassify an RDF document once a war is over. A lot of work has been carried out on security-constraint processing for relational databases, and one needs to determine whether those results can be applied to the semantic web.
Once XML and RDF have been secured, the next step is to examine security for ontologies and interoperation. Ontologies may have security levels attached to them: certain parts of an ontology could be Secret while other parts are Unclassified. The challenge is how to use these ontologies for secure information integration. Researchers have done some work on the secure interoperability of databases; we need to revisit this research and determine what else must be done so that the information on the web can be managed, integrated and exchanged securely. Closely related to security is privacy: certain portions of a document may be private while other portions are public or semi-private. Privacy has received a lot of attention recently, partly due to national security concerns. Privacy for the semantic web may be a critical issue: how does one take advantage of the semantic web and still maintain privacy and, sometimes, anonymity? Note that W3C is actively examining privacy issues, and a good starting point is the P3P (Platform for Privacy Preferences) 1.0 Specification.
We also need to examine the inference problem for the semantic web. Inference is the process of posing queries and deducing new information. It becomes a problem when the deduced information is something the user is unauthorized to know. With the semantic web, and especially with data mining tools, one can make all kinds of inferences.

Security in XML:
Various research efforts on XML security have been reported; we briefly discuss some of the key points. XML documents have graph structures, and the main challenge is whether to give access to entire XML documents or only to parts of them. Bertino et al. have developed authorization models for XML, focusing on access control policies as well as on dissemination policies, and considering both push and pull architectures. They specify the policies in XML; the policy specification contains information about which users can access which portions of the documents. In their work, algorithms for access control, as well as for computing views of the results, are also presented, together with architectures for securing XML documents. Bertino et al. go further and describe how XML documents may be published on the web: owners publish documents, subjects request access to the documents, and untrusted publishers give the subjects the views of the documents they are authorized to see. W3C (the World Wide Web Consortium) is also specifying standards for XML security. The XML security project focuses on providing implementations of security standards for XML: XML-Signature Syntax and Processing, XML-Encryption Syntax and Processing, and XML Key Management. W3C also has a number of working groups, including the XML-Signature and XML-Encryption working groups. While the standards focus on what can be implemented in the near term, a lot of research is still needed on securing XML documents.
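As a minimal illustration of portion-level access control (a hedged sketch, not Bertino et al.'s actual model; the policy format and role names are invented), the following Python code filters an XML document so that a subject sees only the elements a policy grants:

import xml.etree.ElementTree as ET

# Hypothetical policy: for each role, the element tags it may read.
POLICY = {
    "public":  {"name"},
    "manager": {"name", "role", "organization"},
}

def authorized_view(xml_text, role):
    # Return a copy of the document keeping only elements the role may read.
    allowed = POLICY.get(role, set())
    root = ET.fromstring(xml_text)
    for child in list(root):
        if child.tag not in allowed:
            root.remove(child)
    return ET.tostring(root, encoding="unicode")

doc = "<contact><name>Vijay Mukhi</name><role>Internet Expert</role></contact>"
print(authorized_view(doc, "public"))  # the role element is pruned for public readers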
Security in RDF:
RDF is the foundation of the semantic web. While XML is limited in providing machine-understandable documents, RDF addresses this limitation. As a result, RDF provides better support for interoperability as well as for searching and cataloging. It describes the contents of documents as well as relationships between the various entities in a document. While XML provides syntax and notation, RDF supplements this by providing semantic information in a standardized way.
The basic RDF model has three types: resources, properties and statements. A resource is anything described by RDF expressions; it could be a web page or a collection of pages. A property is a specific attribute used to describe a resource. RDF statements are resources together with a named property plus the value of that property; the components of a statement are subject, predicate and object. There are RDF diagrams, much like ER diagrams or object diagrams, for representing statements. There are various aspects specific to RDF syntax; for details we refer to the documents on RDF published by W3C. It is also very important that the intended interpretation be used for RDF sentences. This is accomplished by RDF schemas: a schema is a sort of dictionary holding interpretations of the various terms used in sentences, and RDF and XML namespaces resolve conflicts in semantics.

More advanced concepts in RDF include the container model and statements about statements. The container model has three types of container objects: Bag, Sequence, and Alternative. A Bag is an unordered list of resources or literals, used to state that a property has multiple values whose order is not important. A Sequence is an ordered list of resources; here, the order matters. An Alternative is a list of resources that represent alternatives for the value of a property. Various RDF tutorials describe the syntax of containers in more detail. RDF also provides support for making statements about other statements, and again object-like diagrams can represent containers and statements about statements. RDF has a formal model with a formal grammar associated with it, and, like any language or model, RDF will continue to evolve.

To make the semantic web secure, we need to ensure that RDF documents are secure. This involves securing XML from a syntactic point of view, but with RDF we also need to ensure that security is preserved at the semantic level: the issues include the security implications of the concepts resource, property and statement. There are many difficult questions here, and research is needed to provide answers. XML security is just the beginning; securing RDF is much more challenging.
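A brief rdflib sketch of the Bag container described above (the ex: URIs are invented; rdf:_1, rdf:_2, ... are the standard container membership properties):

from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()

# A Bag: the property has several values and their order is not significant.
g.add((EX.paper1, EX.author, EX.authors))
g.add((EX.authors, RDF.type, RDF.Bag))
g.add((EX.authors, URIRef(str(RDF) + "_1"), Literal("Karun Bakshi")))
g.add((EX.authors, URIRef(str(RDF) + "_2"), Literal("David R. Karger")))

print(g.serialize(format="turtle"))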

Standardization of semantic web security:
Web resources and services need to be protected from unauthorized access, and software agents want assurance about the privacy of the data they disclose to services. Thus a broad range of security-related notions, such as authentication, authorization, access control, confidentiality, data integrity, and privacy, is relevant to semantic web technology. Currently, low-level encryption, digital signature mechanisms, certification, and public key infrastructures provide a good security infrastructure for web-based interactions. However, providing higher-level security, especially without prior trust relations in dynamic interactions, relies on a variety of ad hoc mechanisms, and this heterogeneity of mechanisms leaves security holes with deleterious effects. The proposed industrial standards on security assume a well-established web of trust among business-to-business (B2B) partners. For example, there is a significant body of standardization effort for the security of XML-based web services, such as WS-Security [4], WS-Trust [6], and WS-Policy [8], SAML from the OASIS Security Services Technical Committee, and the Security Specifications of the Liberty Alliance Project. WS-Security provides a layer of security over SOAP, an XML-based protocol for exchanging information that is primarily used for web services; it describes how to attach signature and encryption headers or security tokens to SOAP messages.
These standards support low-level security or policy markups that concern formats of credentials or supported character sets for encoding. They do not address semantic user- or application-specific trust tokens and their relations, nor do they allow for expressive policies. The standards cater to the needs of B2B applications, where trusted partners and business relationships have been established in advance of operation and transactions. However, in a world where more and more public and private services are becoming available online and the vision of cyber-societies is becoming reality, assumptions about pre-established trust relationships no longer hold. The standards do not extend to more dynamic environments in which simple authentication is not enough and authentication on user-defined attributes must be considered, as "foreign" or unknown entities will interoperate with each other across heterogeneous domains and applications using delegation mechanisms.
Applications for Web Automation:
Automated searches:
Ontologies can enhance the functioning of the Web in many ways. They can be used in a simple fashion to improve the accuracy of Web searches: the search program can look only for pages that refer to a precise concept, instead of all the pages that use ambiguous keywords.
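As a hedged sketch of such concept-based search (the vocabulary URIs are invented), an ontology-aware engine can query RDF data for a precise concept rather than matching keywords:

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.page1, RDF.type, EX.WeatherForecast))
g.add((EX.page1, EX.city, Literal("Mumbai")))
g.add((EX.page2, RDF.type, EX.ScientificPaper))

# Ask for weather forecasts only; a keyword engine cannot make this distinction.
results = g.query(
    'SELECT ?page WHERE { ?page a ex:WeatherForecast ; ex:city "Mumbai" . }',
    initNs={"ex": EX},
)
for row in results:
    print(row.page)  # http://example.org/page1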
Real World Automation:
In the next step, the Semantic Web will break out of the virtual realm and extend into our physical world. URIs can point to anything, including physical entities, which means we can use the RDF language to describe devices such as cell phones and TVs. Such devices can advertise their functionality (what they can do and how they are controlled) much like software agents.
Automated Web Services:
Automatic Web service discovery involves automatically locating Web services that provide a particular service and adhere to requested properties. A user might say, for example, "Find a service that sells airline tickets between San Francisco and Mumbai and that accepts payment by Diner's Club credit card." Currently, a human must perform this task, first using a search engine to find a service and then either reading the Web page associated with that service or executing the service to see whether it adheres to the requested properties. With semantic markup of services, we can specify the information necessary for Web service discovery as computer-interpretable semantic markup at the service Web sites, and a service registry or (ontology-enhanced) search engine can automatically locate appropriate services.
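A minimal sketch of such discovery over semantic service markup (the service vocabulary here is invented; real markup would use a standard such as OWL-S):

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/services#")
g = Graph()
g.add((EX.airFast, RDF.type, EX.TicketSeller))
g.add((EX.airFast, EX.origin, Literal("San Francisco")))
g.add((EX.airFast, EX.destination, Literal("Mumbai")))
g.add((EX.airFast, EX.acceptsCard, Literal("Diners Club")))

# A registry query expressing the user's request from the example above.
results = g.query(
    """SELECT ?svc WHERE {
           ?svc a ex:TicketSeller ;
                ex:origin "San Francisco" ;
                ex:destination "Mumbai" ;
                ex:acceptsCard "Diners Club" .
       }""",
    initNs={"ex": EX},
)
for row in results:
    print(row.svc)  # http://example.org/services#airFast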

Integration via Meaning:
In a briefing that has subsequently become well known, Keith Jeffery proposed a three-layer architecture for a future distributed information systems architecture (Jeffery 2000). The three general levels were:
• Computation/Data layer: the basic protocols for accessing, invoking and scheduling the use of computation and data resources. This includes the fundamental networking protocols, the addressing of resources, and the low-level data formats and remote method invocation used to access the raw power. To use this layer effectively, however, the user has to know in advance the location, the data formats and the functionality of the resources available.
• Information layer: information on the resources available on the distributed system is published via descriptions of those resources (commonly known as metadata). This allows the discovery and negotiation of resources within known domains of practice: that is, the meaning of the metadata is agreed within a particular community.
• Knowledge layer: access to resources is negotiated through the semantics of the metadata encapsulated within the system. This layer provides contextualised access to information, utilising semantic knowledge and reasoning. Processes within this layer include reclassifying information against new ontologies, to enable interoperability between different semantics, and knowledge discovery in databases (including data mining).

Whilst originally promoted within the domain of distributed Grid systems supporting an infrastructure for scientific applications, this analysis of distributed systems applies equally well to the Web. Indeed, the distinction is likely to become nonsensical as the whole world converges on a single architecture for wide-area distributed systems.
Within the Web, the basic existing infrastructure (for addressing data), augmented with the basic Web Services architecture (for accessing computational resources), forms the data and computation layer, with the familiar Web server as the defining tool: one which dumbly responds to requests for resources, possibly passing control to other systems in a peer-to-peer manner. Existing community efforts to standardise on particular XML Schemas for both data and Web Services, particularly infrastructure efforts such as those mentioned above (P3P, PICS, CC/PP, RSS), which can be seen as augmenting the basic Web Service architecture, distinguish the information layer of the Web. In this layer the Portal is the defining tool: a tool that uses metadata defined within a known domain to access resources which, although unknown to the user, have a previously agreed semantics and can be used by the user. Thus the intelligence still resides with the human user.
The Knowledge layer within the Web will be supported by the Semantic Web; we shall discuss in more detail what this entails. Jeffery identifies control as a connecting feature between the layers: knowledge about the relationships between resources controls access to information about resources, which in turn controls access to the resources themselves. We could equally well say that another distinguishing feature of the layers is the decreasing need for prior knowledge as we go up the layers, and the increase in delegation, as more functions are delegated from the user to the system, ultimately to a system of intelligent agents. Thus in the data/computation layer, users exercise direct control over resources they know about in advance. In the information layer, users delegate some tasks, such as resource discovery and access, to portals; they may not know the location of resources, but they must have prior knowledge of the portal and of the nature of the information the portal processes. In the Knowledge layer, delegation should increase and prior knowledge decrease, so the user will be able to delegate the task of discovering appropriate information sources, without needing to know the meaning or even the existence of that information in advance. The user should be able to specify the task that he or she wants to perform and then delegate it to the system. Thus, the defining software component of the knowledge layer is the intelligent agent.
Three types of agent will typically be present within this system:

• User proxy agent: an agent acting on the user's behalf. It initiates and coordinates user actions and queries to the web, seeks out and offers relevant resources to the user, acts as the user's proxy when the user is absent, and automatically responds to requests directed at the user according to the user's preferences and security settings.
• Resource agent: an agent acting on behalf of a resource. It responds to requests for access to the resource, coordinates queries with other resources, and controls and monitors access to the resource.
• Broker agent: an agent not connected to any resource; it provides a discovery and negotiation service for other agents, searching for appropriate resources, negotiating access and monitoring usage.

Agents will thus negotiate with each other on the basis of attempting to determine the meaning of resources; the main integration mechanism of a Semantic Web-enabled Web Service architecture will therefore be meaning itself: we will have Integration via Meaning. Meanwhile, the portals and other tools at the lower levels will disappear into the infrastructure of the Web itself; the user will need no prior knowledge of the portals or the semantics they support, and the interaction with a portal will be mediated by the user's agent, which will attempt to reconcile the semantics provided by the portal with the semantics of the user.
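A toy sketch of broker-agent matchmaking (the resource descriptions and the shared-term heuristic are invented for illustration; real brokers would reason over ontologies rather than count overlapping terms):

# Hypothetical advertisements: ontology terms each resource claims to support.
ADVERTISEMENTS = {
    "weather-service": {"WeatherForecast", "City", "Temperature"},
    "paper-archive":   {"ScientificPaper", "Author", "Abstract"},
}

def broker_match(request_terms):
    # Rank advertised resources by how many requested terms they share.
    scored = [(len(request_terms & terms), name)
              for name, terms in ADVERTISEMENTS.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(broker_match({"WeatherForecast", "City"}))  # ['weather-service']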

Particular functions of the Semantic Web

Particular functions will be provided via this Semantic Web layer on top of Web Services, processed via the actors' agents.
• Resource organisation, searching and discovery. By expressing the semantics of resources in terms of ontologies, together with interoperability and reasoning, agents can discover relevant resources and present them back to the user in the user's own terms. This includes access to web service descriptions; currently these are expressed in terms of either interface descriptions or domain-specific criteria. In the Semantic Web, web service descriptions would include a formal expression of the functionality of the service, and a proof could determine whether the service satisfies the user's requirements.
• Brokering and negotiation. Once suitable resources have been identified, intermediary brokers can reconcile the requirements of users and resources, in terms of their meanings, to determine whether a suitable deal or contract can be established for usage of the resource.
• Trust management. Establishing trust between agents that have no prior knowledge of each other is a major problem within the Web, and one which could potentially prevent the establishment of a universal Web Service infrastructure. Agents need to be able to negotiate access to resources, and again Semantic Web techniques can enable this: services could publish policies, user agents could present their credentials, possibly with reference to a trusted third party, and, once trust has been established, suitable access rights and obligations could be negotiated (Dimitrakos 2002).
• Quality of service. Similarly, users and resources may have conditions with respect to the quality of service they require (e.g. response time, accuracy of data, level of confidentiality). These properties of resources can be expressed in Semantic Web terms and negotiated via agents.
• Auditing and monitoring. Monitoring agents can track the usage of the web and provide audit trails. This mechanism would include functionality to track expenditure and perform billing.
• Personalisation. User agents will be able to represent and enforce the preferences of the user. This includes how users would prefer their own information to be used, what requirements they have on the information they would like to access, and how it is presented back to them. This infrastructure is already emerging with CC/PP, P3P, and PICS. Through negotiation with RSS and other specifications such as XHTML modules, the form of information a user requires can be negotiated.
The Semantic Web: why is it important for business?

Consider the following two questions:

1. Find Indian restaurants in Blackpool
2. What are my prospects doing about renewable energy in the North West?
What is the difference between these two questions?
Both are specific, both are answerable, and both answers are valuable, albeit in different ways. Yet any well-informed information consumer could reasonably expect any internet search engine (Yahoo, Google, Microsoft, etc.) to answer one of them quickly and easily, while the other would be difficult to the point of infeasibility. Why? If you analyse the assumptions underpinning these two questions, you begin to understand why semantics matter in the field of search and information gathering. In the first example there are two important terms, "Indian restaurant" and "Blackpool", and the important thing about them is that they are relatively unambiguous: their meaning would be fairly universal among any given set of people. This does not mean the terms are completely unambiguous; there may well be several places around the world called "Blackpool", and the term "Indian" as it relates to food covers a very diverse set of possibilities (curry, tandoori, kebabs, Bengali, Bangladeshi, Pakistani and so on). The reason the first question is relatively easy to answer has more to do with the fact that a large number of humans share a consensus about what the question means than with the physical mechanism a computer would use to find, group and display content from the internet.

In the second example we run into difficulty almost immediately: what does the term "prospects" mean? Given any particular company or sales & marketing team this term is perfectly unambiguous, but to anyone outside that circle of understanding (i.e. the internet search engines) it is meaningless. Next we have the term "renewable energy"; this is certainly more specific and more broadly identifiable than "prospects", but it nevertheless has a wide meaning, and the list of things it could successfully represent would differ depending on who is asked and when the question is asked.
Lastly we have a classically ambiguous term in "North West": utterly meaningless unless you also know which country, counties, towns and postcodes it covers; or perhaps it is simply a manufactured entity defined to suit a particular company, its market and how it sells to that market.

BENEFITS OF THE SEMANTIC WEB
• Information is captured in a language-agnostic format.
• A central repository for knowledge is created.
• More precise, relevant information is captured.
• Processes and procedures are mapped to data sources.
• One collective view of knowledge across enterprise applications is created.
Conclusion & Future:
We predict that in the next few years virtually every company, university, government agency or ad hoc interest group will want its web resources linked to ontological content, because of the many powerful tools that will be available for using it. Information will be exchanged between applications, allowing computer programs to collect and process web content and to exchange information freely with each other. On top of this infrastructure, agent-based computing will become much more practical; indeed, distributed computer programs interacting with non-local web-based resources may eventually become the dominant way in which computers interact with humans and with each other. Web users will not mark up their web pages unless they perceive a value in doing so, and tools to demonstrate this value will not be developed unless web resources are marked up. To help solve this chicken-and-egg problem, some government organizations are now funding researchers both to develop freely available tools and to provide significant content for those tools to manipulate, thus demonstrating to society that the semantic web can be a reality, not just a vision. Perhaps some of the ideas expressed in this paper will inspire the creation of exciting web-agent applications.

Effective acquisition, organization, processing, sharing, and use of the knowledge embedded in (textual and multimedia) web content, as well as in information- and knowledge-based work processes, plays a major role in competitiveness in the modern information society and in the emerging knowledge economy. However, the wealth of knowledge implicitly conveyed in the vast amount of available digital content is nowadays accessible only where considerable manual effort has been invested in its interpretation and semantic annotation, which is possible for only a small fraction of the available content. The major part of this implicit semantic knowledge is therefore not taken into account by state-of-the-art information access technologies such as search engines, which restrict their indexing activities to superficial levels, mostly the keyword level.
Multilinguality and multicultural expression are important aspects of human society. Texts and documents are, and will be, written in various native languages, but these documents are relevant even to non-native speakers. We could imagine bypassing the multilingual problem by focusing directly on knowledge itself rather than on language, but in fact human knowledge is and will be encoded in language, and multilingual and multicultural aspects (culture as specific to countries, regions and nations, connected with language) will play an important role in establishing and maintaining such common understanding. The Semantic Web must represent and structure concepts in multilingual and multicultural ontologies, which can be obtained only by linking conceptual nodes with their various language-specific lexical realizations. Given these considerations, we have presented ongoing initiatives and projects in the HLT domain that take the (multilingual and multicultural) Semantic Web as a challenge for their own field of research and development, and that address the changes to be achieved within their own community, not only at the level of resources and tools but also at the level of the language infrastructure in general. These are encouraging steps towards a global research and development effort to establish a distributed, standardized and semantically interoperable infrastructure of language resources and tools, which would enable widespread integration of multilingual analysis tools into semantic web services and applications.

References:
[1] Christoph Bussler, "Is Semantic Web Technology Taking the Wrong Turn?", Jan/Feb 2008.
[2] "Interactive Media and Artificial Intelligence", Dec 2009, vol. 1, no. 2, via scholar.google.com.
[3] Brian M. Matthews, "Integration via Meaning: Using the Semantic Web", via scholar.google.com.
[4] "Making Secure Semantic Web", Universal Journal of Computer Science and Engineering Technology, Nov. 2010, via scholar.google.com.
[5] Karun Bakshi and David R. Karger, "Semantic Web Applications", via scholar.google.com.
[6] www.springerlink.com
[7] www.xmlschama.com
[8] www.w3c.com
[9] www.w3.com
