Thesis: Some studies on a probabilistic framework for finding Object-Oriented information in unstructured data
TABLE OF CONTENTS
Introduction
Chapter 1. Object Search
1.1 Web-page Search
1.1.1 Problem definitions
1.1.2 Architecture of search engine
1.1.3 Disadvantages
1.2 Object-level search
1.2.1 Two motivating scenarios
1.2.2 Challenges
1.3 Main contribution
1.4 Chapter summary
Chapter 2. Current state of the previous work
2.1 Information Extraction Systems
2.1.1 System architecture
2.1.2 Disadvantages
2.2 Text Information Retrieval Systems
2.2.1 Methodology
2.2.2 Disadvantages
2.3 A probabilistic framework for finding object-oriented information in unstructured data
2.3.1 Problem definitions
2.3.2 The probabilistic framework
2.3.3 Object search architecture
2.4 Chapter summary
Chapter 3. Feature-based snippet generation
3.1 Problem statement
3.2 Previous work
3.3 Feature-based snippet generation
3.4 Chapter summary
Chapter 4. Adapting object search to Vietnamese real estate domain
4.1 An overview
4.2 A special domain - real estate
4.3 Adapting probabilistic framework to Vietnamese real estate domain
4.3.1 Real estate domain features
4.3.2 Learning with Logistic Regression
4.4 Chapter summary
Chapter 5. Experiment
5.1 Resources
5.1.1 Experimental Data
5.1.2 Experimental Tools
5.1.3 Prototype System
5.2 Results and evaluation
5.3 Discussion
5.4 Chapter summary
Chapter 6. Conclusions
6.1 Achievements and Remaining Issues
6.2 Future Work
… summarize our main contribution throughout this thesis.
Chapter 2. Current state of the previous work
We have introduced the object search problem, which has attracted the interest of many researchers. In this chapter, we discuss plausible solutions that have been proposed recently, with a focus on a novel machine learning framework for solving the problem.
2.1 Information Extraction Systems
One of the first solutions to the object search problem is based on Information Extraction systems. After fetching web data related to the targeted objects within a specific vertical domain, a specific entity extractor is built to extract objects from the web data. At the same time, information about the same object is aggregated from multiple different data sources. Once objects are extracted and aggregated, they are put into object warehouses, and vertical search engines can be constructed on top of the object warehouses [26][27]. Two well-known search engines have been built following this approach: the scientific search engine Libra and the product search engine Windows Live Product Search. In Vietnam, Cazoodle, a company supported by professor Kevin Chuan Chang, is also developing a system under this approach.
2.1.1 System architecture
2.1.1.1 Object-level Information Extraction
The task of an object extractor is to extract metadata about a given type of object from every web page containing that type of object. For example, for each crawled product page, the system extracts the name, image, price, and description of each product.
However, extracting object information from web pages generated by many different templates is non-trivial. One possible solution is to first distinguish web pages generated by different templates and then build an extractor for each template (template-dependent). However, this approach is not feasible in practice. Therefore, Zaiqing Nie has proposed template-independent metadata extraction techniques [26][27] for the same type of objects by extending linear-chain Conditional Random Fields (CRFs).
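To illustrate why template-dependent extraction is brittle, the sketch below shows a toy extractor whose regular expressions are written for one hypothetical HTML template; a page using any other template yields nothing, so every new template would require new patterns. This is only an illustration of the template-dependent baseline, not the CRF-based method of [26][27].

import re

# Toy template-dependent extractor: each pattern targets ONE site's HTML
# template (the class names below are hypothetical), so a differently
# formatted page yields no metadata at all.
TEMPLATE_A_PATTERNS = {
    "name": re.compile(r'<h1 class="product-title">(.*?)</h1>', re.S),
    "price": re.compile(r'<span class="price">([\d.,]+)</span>'),
}

def extract_with_template(html, patterns):
    """Return an attribute -> value dictionary for whichever patterns match."""
    record = {}
    for attr, pattern in patterns.items():
        match = pattern.search(html)
        if match:
            record[attr] = match.group(1).strip()
    return record

page = '<h1 class="product-title">Acme Phone</h1><span class="price">199.99</span>'
print(extract_with_template(page, TEMPLATE_A_PATTERNS))  # {'name': 'Acme Phone', 'price': '199.99'}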
2.1.1.2 Object Aggregator
Each extracted web object need to be mapped to a real world object and stored
into a web data warehouse. Hence, the object aggregator needs to integrate information
about the same object and disambiguate different objects.
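As a minimal sketch of what an object aggregator does, the code below groups extracted records under a normalized name and merges their attributes. The key function is a deliberately naive assumption for illustration; real aggregators use much more robust entity resolution.

from collections import defaultdict

def object_key(record):
    # Naive disambiguation key: lower-cased, whitespace-normalized name.
    return " ".join(record.get("name", "").lower().split())

def aggregate(records):
    """Merge records about the same real-world object into one warehouse entry."""
    warehouse = defaultdict(dict)
    for record in records:
        for attr, value in record.items():
            # Keep the first value seen for each attribute; later sources only
            # fill in attributes that are still missing.
            warehouse[object_key(record)].setdefault(attr, value)
    return dict(warehouse)

records = [
    {"name": "Acme Phone", "price": "199.99"},
    {"name": "ACME  Phone", "description": "A smartphone."},
]
print(aggregate(records))  # one merged entry keyed by "acme phone"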
Figure 6. System architecture of Object Search based on IE
[The figure shows a Crawler and Classifier feeding domain-specific extractors (Paper, Author, Product) and aggregators (Paper, Author, Product), which populate a Scientific Web Object Warehouse and a Product Web Object Warehouse; ranking components include PopRank, Object Relevance, and Object Categorization.]
2.1.1.3 Object retrieval
After information extraction and integration, the system should provide a retrieval mechanism to satisfy users' information needs. Basically, retrieval should be conducted at the object level, which means that the extracted objects should be indexed and ranked against user queries.
To return results more effectively, the system should have a more powerful ranking model than current technologies. Zaiqing Nie has proposed the PopRank model [28], a method to measure the popularity of web objects in an object graph.
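To give an intuition for link-based object popularity, the sketch below runs a generic PageRank-style power iteration over a small object graph. PopRank's actual formulation in [28] differs (for example, in how links between different object types are weighted), so this is only an illustrative approximation.

def popularity(graph, damping=0.85, iterations=50):
    """Generic power iteration: objects linked to by popular objects become popular."""
    nodes = list(graph)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_score = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, targets in graph.items():
            if not targets:
                continue
            share = damping * score[src] / len(targets)
            for dst in targets:
                new_score[dst] += share
        score = new_score
    return score

# Tiny hypothetical object graph: papers citing papers.
graph = {"paper_a": ["paper_b"], "paper_b": ["paper_c"], "paper_c": ["paper_a", "paper_b"]}
print(popularity(graph))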
2.1.2 Disadvantages
As discussed above, one of obvious advantages is that once object information is
extracted and stored in warehouse, it can be retrieved effectively by a SQL query or
some new technologies.
However, to extract object from web pages, it is usually labor intensive and
expensive techniques (e.g: HTML rendering). Therefore, it is not only difficult to scale
to the size of the web, but also not adaptable because of different formats. Moreover,
whenever new websites appear in a totally new format, it is impossible to extract objects without writing a new IE module.
2.2 Text Information Retrieval Systems
2.2.1 Methodology
Another method for solving the object search problem is to adapt existing text search engines like Google, Yahoo, and Live Search. Most current search engines provide an advanced search function that lets users specify their information needs more precisely.
We can customize a search engine in many ways to target a domain. For example, one can restrict the list of returned sites, such as ".edu" sites, to search for professor homepages. Another way is to add keywords such as "real estate, price" to the original queries to "bias" the search results toward real estate search.
Figure 7. Examples of customizing Google Search engine
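The sketch below shows how such customizations could be automated by simply rewriting the query string: appending a site restriction or a list of domain "bias" keywords. The site: operator syntax follows common web search engines, and the keyword lists are hypothetical examples.

def customize_query(query, site=None, bias_keywords=None):
    """Append a site restriction and/or domain 'bias' keywords to a query."""
    parts = [query]
    if bias_keywords:
        parts.extend(bias_keywords)
    if site:
        parts.append("site:" + site)
    return " ".join(parts)

print(customize_query("John Smith homepage", site=".edu"))
# John Smith homepage site:.edu
print(customize_query("apartment Cau Giay", bias_keywords=["real estate", "price"]))
# apartment Cau Giay real estate price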
2.2.2 Disadvantages
The advantage of this approach is scalability, because indexing text is very fast. In addition, text can be retrieved efficiently using inverted indices. Therefore, text retrieval systems scale well with the size of the web.
However, these approaches are not adaptable. In the above examples, the site restrictions or "bias" keywords must be entered manually. Each domain has its own "bias" keywords, and in many cases such customizations are not enough to target the domain. Therefore, it is hard to adapt to a new domain or to changes on the web.
2.3 A probabilistic framework for finding object-oriented information in unstructured data
The two solutions above are plausible for the object search problem. However, the Information Extraction based solution has low scalability and low adaptability, while the Text Information Retrieval based solution has high scalability but low adaptability. As a result, another approach has been proposed: a probabilistic framework for finding object-oriented information in unstructured data, presented in [13].
2.3.1 Problem definitions
Definition 1: An object is defined by 3 tuples of length n, where n is the number
of attributes, N, V, T. N = (α1, α2.. αn) are the names of attributes. V = (β1, β2.. βn) are
the attribute values. T = (µ1, µ2.. µn) are the types that each attribute value can take in
which µ i often is of {number, text}.
Example 1: "An apartment in Hanoi with used area 100 m2, 2 bedrooms, 2 bathrooms, East direction, 500 million VND" is defined as N = (location, type, area, bedrooms, bathrooms, direction, price), V = ('Hanoi', 'apartment', 100, 2, 2, 'East', 500), and T = (text, text, number, number, number, text, number).
Definition 2: An object query is defined as a conjunction of n attribute constraints, Q = (c1 ^ c2 ^ … ^ cn). A constraint is the constant 1 (always satisfied) when the user does not care about the corresponding attribute. Each constraint depends on the type of the attribute: a numeric attribute can have a range constraint, and a text attribute can have either a term or a phrase constraint.
Example 2: An object query for "an apartment in Cau Giay with area at least 100 m2 and price at most 1 billion VND" is defined as Q = (location = 'Cau Giay' ^ type = 'apartment' ^ area >= 100 ^ 1 ^ 1 ^ 1 ^ price <= 1 billion VND). The query means the user does not care about "bedrooms", "bathrooms", and "direction".
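As a minimal sketch of Definitions 1 and 2, the code below encodes the real estate object from Example 1 as the tuples N, V, T and the query from Example 2 as one predicate per attribute, with the constant 1 ("don't care") as an always-true predicate. The predicate encoding and the price unit (millions of VND) are illustrative assumptions, not the thesis's notation.

# Definition 1: an object as three tuples of length n (Example 1).
N = ("location", "type", "area", "bedrooms", "bathrooms", "direction", "price")
V = ("Hanoi", "apartment", 100, 2, 2, "East", 500)
T = ("text", "text", "number", "number", "number", "text", "number")

# Definition 2: a query as one constraint per attribute (Example 2);
# prices are assumed to be in millions of VND, so 1 billion VND = 1000.
dont_care = lambda value: True
Q = (
    lambda v: v == "Cau Giay",   # location
    lambda v: v == "apartment",  # type
    lambda v: v >= 100,          # area: at least 100 m2
    dont_care,                   # bedrooms
    dont_care,                   # bathrooms
    dont_care,                   # direction
    lambda v: v <= 1000,         # price: at most 1 billion VND
)

def satisfies(values, query):
    """True if every constraint c_i holds for the corresponding attribute value."""
    return all(constraint(value) for constraint, value in zip(query, values))

print(satisfies(V, Q))  # False: the object is in Hanoi, not Cau Giay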
Another way of looking at our object search problem, from the traditional database perspective, is as supporting SELECT queries for objects on the web.
Table 2. Object search problem definition
Given: an index of the web W, an object domain Dn
Input: an object query Q = c1 ^ c2 ^ … ^ cn
Output: a ranked list of pages in W
To sum up, we can view the object search problem as an advanced retrieval query over a database:
SELECT web_pages
FROM the_web
WHERE Q = c1 ^ c2 ^ … ^ cn is true
ORDER BY probability_of_relevance
2.3.2 The probabilistic framework
• Object Ranking
Instead of extracting objects from web pages, the system returns a ranked list of web pages that contain the objects users are looking for. In this framework, ranking is based on the probability of relevance of a given object query and a document, P(relevant | object_query, document). Assuming that an object query is a conjunction of several constraints, one for each attribute of the object, and that these constraints are independent, the probability of the whole query can be computed from the probabilities of the individual constraints.
P(Q) = P(c1 ^ c2 ^ … ^ cn)
     = P(c1) P(c2) … P(cn)   (1)
To calculate the individual probability P(ci), the approach uses machine learning to estimate it with Pml(s | xi), where xi = (xi1, xi2, …, xik) is the vector of relevance features between constraint ci and the document.
P(ci) = P(ci | correct) × P(correct) + P(ci | incorrect) × P(incorrect)
      = Pml(s | xi) × (1 − ε) + 0.5 × ε   (2)
Here ε is the error rate of the machine learning algorithm: if the machine learning prediction is wrong, the best guess for P(ci) is 0.5.
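As a minimal numeric sketch of equations (1) and (2), the code below combines each constraint's machine-learning estimate Pml(s | xi) with the learner's error rate ε and multiplies the results; the probability values and error rate are made-up examples.

def constraint_probability(p_ml, eps):
    """Equation (2): P(c_i) = Pml(s | x_i) * (1 - eps) + 0.5 * eps."""
    return p_ml * (1.0 - eps) + 0.5 * eps

def query_probability(p_ml_scores, eps):
    """Equation (1): P(Q) is the product of the individual constraint probabilities."""
    p = 1.0
    for p_ml in p_ml_scores:
        p *= constraint_probability(p_ml, eps)
    return p

# Four constraints with hypothetical ML relevance estimates and a 10% error rate.
print(query_probability([0.9, 0.8, 1.0, 0.7], eps=0.1))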
• Learning with logistic regression
The next task of the framework is how to calculate Pml(s | xi) by machine learning. To do this, the approach uses Logistic Regression [21] because it not ...
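As a minimal sketch of how Pml(s | xi) could be estimated with logistic regression, the code below uses scikit-learn on a tiny made-up training set of relevance feature vectors; the thesis's actual features for the real estate domain are defined later (Chapter 4), so this is only an illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical relevance features between a constraint and a document,
# with labels: 1 = the constraint is satisfied in the document.
X_train = np.array([[0.9, 1.0], [0.1, 0.0], [0.8, 1.0], [0.2, 0.0]])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression()
model.fit(X_train, y_train)

x_new = np.array([[0.7, 1.0]])
p_ml = model.predict_proba(x_new)[0, 1]  # Pml(s | x) for the positive class
print(round(p_ml, 3))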