Monday 30 September 2013

Web Scraper Shortcode WordPress Plugin Review

This short post reviews the WordPress plugin Web Scraper Shortcode, which lets you retrieve a portion of a web page, or a whole page, and insert it directly into a post. The plugin can be used to pull fresh data or images from other web pages into your WordPress-driven site without visiting them yourself. You can find more scraping plugins and software here.

To install it in WordPress go to Plugins -> Add New.
Usage

The plugin scrapes the page content and applies the specified parameters to the scraped page. To use the plugin, just insert the

[web-scraper ]

shortcode into the HTML view of the WordPress page where you want to display the excerpts of a page or the whole page. The parameters are as follows:

    url – the URL of the page to scrape (self-explanatory)
    element – the DOM navigation notation for the element(s), similar to XPath.
    limit – the maximum number of elements to be scraped and inserted, if the element notation points to several of them (like elements of the same class).

The plugin uses DOM (Document Object Model) notation, where consecutive DOM nodes are written as node1.node2; for example: element = 'div.img'. A specific element is selected using the '#' notation. For example, if you want to scrape several 'div' elements of the class 'red' (<div class='red'>…</div>), you specify the element attribute this way: element = 'div#red'.
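Putting this together, a shortcode call might look like the following (an illustrative example; check the plugin's documentation for the exact attribute syntax):

    [web-scraper url="http://example.com/page" element="div#red" limit="3"]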
How to find DOM notation?

But how can an inexperienced user find the DOM notation of the desired element(s) on a web page? The browser's Web Developer Tools are a handy means for this. I would refer you to this paragraph on how to invoke the Web Developer Tools in the browser (Google Chrome) and select a single page element to inspect it. As you select it with the 'loupe' tool, the bottom bar shows a blue box with the element's DOM notation:


The plugin content

As someone who works with web scraping, I was curious about the means the plugin uses for scraping. Looking at the plugin code, it turned out that the plugin acquires a web page through the 'simple_html_dom' class:

    require_once('simple_html_dom.php');
    $html = file_get_html($url);
    The code then iterates over the designated elements, up to the set limit, roughly like this:
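    // illustrative sketch only, not the plugin's exact code; $element and $limit
    // stand for the shortcode parameters described above
    foreach ($html->find($element) as $i => $node) {
        if ($i >= $limit) break;
        echo $node->outertext;
    }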

Pitfalls

    Be careful if you put two or more [web-scraper] shortcodes on your website, since downloading other pages will drastically slow the page load speed. Even if you want only a small element, the PHP engine first loads the whole page and then iterates over its elements.
    You need to remember that many pictures on the web are referenced by relative (shortened) URLs. When such an image is extracted it may show up as a broken image, since the plugin does not resolve the shortened URL against its base URL.
    The error “Fatal error: Call to a member function find() on a non-object …” will occur if you put this shortcode in a text-overloaded post.

Summary

I'd recommend this plugin for short posts that need to embed elements from other pages, though its uses are limited.



Source: http://extract-web-data.com/web-scraper-shortcode-wordpress-plugin-review/

Friday 27 September 2013

Visual Web Ripper: Using External Input Data Sources

Sometimes it is necessary to use external data sources to provide parameters for the scraping process. For example, you have a database with a bunch of ASINs and you need to scrape all product information for each one of them. As far as Visual Web Ripper is concerned, an input data source can be used to provide a list of input values to a data extraction project. A data extraction project will be run once for each row of input values.

An input data source is normally used in one of these scenarios:

    To provide a list of input values for a web form
    To provide a list of start URLs
    To provide input values for Fixed Value elements
    To provide input values for scripts

Visual Web Ripper supports the following input data sources:

    SQL Server Database
    MySQL Database
    OleDB Database
    CSV File
    Script (A script can be used to provide data from almost any data source)

To see it in action you can download a sample project that uses an input CSV file with Amazon ASIN codes to generate Amazon start URLs and extract some product data. Place both the project file and the input CSV file in the default Visual Web Ripper project folder (My Documents\Visual Web Ripper\Projects).

For further information please look at the manual topic, explaining how to use an input data source to generate start URLs.


Source: http://extract-web-data.com/visual-web-ripper-using-external-input-data-sources/

Scraping Amazon.com with Screen Scraper

Let's look at how to use Screen Scraper to scrape Amazon products when you have a list of ASINs in an external database.

Screen Scraper is designed to be interoperable with all sorts of databases and web-languages. There is even a data-manager that allows one to make a connection to a database (MySQL, Amazon RDS, MS SQL, MariaDB, PostgreSQL, etc), and then the scripting in screen-scraper is agnostic to the type of database.

Let's go through a sample scrape project so you can see it at work. I don't know how well you know Screen Scraper, but I assume you have it installed, along with a MySQL database you can use. You need to:

    Make sure screen-scraper is not running as workbench or server
    Put the Amazon (Scraping Session).sss file in the “screen-scraper enterprise edition/import” directory.
    Put the mysql-connector-java-5.1.22-bin.jar file in the “screen-scraper enterprise edition/lib/ext” directory.
    Create a MySQL database for the scrape to use, and import the amazon.sql file.
    Put the amazon.db.config file in the “screen-scraper enterprise edition/input” directory and edit it to contain proper settings to connect to your database.
    Start the screen scraper workbench

Since this is a very simple scrape, you just want to run it in the workbench (most of the time you want to run scrapes in server mode). Start the workbench, and you will see the Amazon scrape in there, and you can just click the “play” button.

Note that a breakpoint comes up for each item. It would be easy to save the scraped details to a database table or file if you want. Also notice in the database how the "id_status" value changes as each item is scraped.

When the scrape is run, it looks in the database for products marked “not scraped”, so when you want to re-run the scrapes, you need to:

UPDATE asin
SET `id_status` = 0

Have a nice scraping! ))

P.S. We thank Jason Bellows from Ekiwi, LLC for such a great tutorial.


Source: http://extract-web-data.com/scraping-amazon-com-with-screen-scraper/

Thursday 26 September 2013

Using External Input Data in Off-the-shelf Web Scrapers

There is a question I've wanted to shed some light upon for a long time: "What if I need to scrape several URLs based on data in some external database?"

For example, recently one of our visitors asked a very good question (thanks, Ed):

    “I have a large list of amazon.com asin. I would like to scrape 10 or so fields for each asin. Is there any web scraping software available that can read each asin from a database and form the destination url to be scraped like http://www.amazon.com/gp/product/{asin} and scrape the data?”

This question impelled me to investigate this matter. I contacted several web scraper developers, and they kindly provided me with detailed answers that allowed me to bring the following summary to your attention:
Visual Web Ripper

An input data source can be used to provide a list of input values to a data extraction project. A data extraction project will be run once for each row of input values. You can find the additional information here.
Web Content Extractor

You can use the -at"filename" command line option to add new URLs from a TXT or CSV file:

    WCExtractor.exe projectfile -at"filename" -s

projectfile – the file name of the project (*.wcepr) to open.
filename – the file name of the CSV or TXT file that contains URLs separated by newlines.
-s – starts the extraction process.

You can find some options and examples here.
Mozenda

Since Mozenda is cloud-based, the external data needs to be loaded up into the user’s Mozenda account. That data can then be easily used as part of the data extracting process. You can construct URLs, search for strings that match your inputs, or carry through several data fields from an input collection and add data to it as part of your output. The easiest way to get input data from an external source is to use the API to populate data into a Mozenda collection (in the user’s account). You can also input data in the Mozenda web console by importing a .csv file or importing one through our agent building tool.

Once the data is loaded into the cloud, you simply initiate building a Mozenda web agent and refer to that Data list. By using the Load page action and the variable from the inputs, you can construct a URL like http://www.amazon.com/gp/product/%asin%.
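Outside of any particular tool, the URL-construction step itself is easy to sketch. For example, a minimal Python sketch, assuming a local asins.csv file with one ASIN per line:

    import csv

    with open("asins.csv", newline="") as f:
        for row in csv.reader(f):
            asin = row[0].strip()
            # substitute the ASIN into the product URL template
            print("http://www.amazon.com/gp/product/" + asin)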
Helium Scraper

Here is a video showing how to do this with Helium Scraper:


The video shows how to use the input data as URLs and as search terms. There are many other ways you could use this data, way too many to fit in a video. Also, if you know SQL, you could run a query to get the data directly from an external MS Access database like
SELECT * FROM [MyTable] IN "C:\MyDatabase.mdb"

Note that the database needs to be a “.mdb” file.
WebSundew Data Extractor

Basically, WebSundew allows using input data from external data sources. This may be a CSV file, an Excel file or a database (MySQL, MSSQL, etc.). Here you can see how to do this in the case of an external file, but you can do it with a database in a similar way (you just need to write an SQL script that returns the necessary data). In addition to passing URLs from external sources, you can pass other input parameters as well (input fields, for example).
Screen Scraper

Screen Scraper is really designed to be interoperable with all sorts of databases. We have composed a separate article where you can find a tutorial and a sample project about scraping Amazon products based on a list of their ASINs.


Source: http://extract-web-data.com/using-external-input-data-in-off-the-shelf-web-scrapers/


Tuesday 24 September 2013

Selenium IDE and Web Scraping

Selenium is a browser automation framework that includes an IDE, a Remote Control server and bindings in various flavors, including Java, .Net, Ruby, Python and others. In this post we touch on the basic structure of the framework and its application to web scraping.
What is Selenium IDE


Selenium IDE is an integrated development environment for Selenium scripts. It is implemented as a Firefox plugin and allows browser interactions to be recorded and then edited, which works well for composing and debugging software tests. The Selenium Remote Control is a server, specific to a particular environment, that lets custom scripts drive the controlled browsers. Selenium deploys on Windows, Linux, and Mac OS X. You can read here how the various Selenium components are supported by the major browsers.
What does Selenium do and Web Scraping

Basically, Selenium automates browsers, and this ability can no doubt be applied to web scraping. Since browsers (and Selenium) support JavaScript, jQuery and other methods of working with dynamic content, why not use this mix for web scraping rather than trying to catch Ajax events with plain code? The second reason for this kind of scrape automation is browser-fashion data access, that is, requests that look like they come from a real browser (though today most libraries can emulate this).

Yes, Selenium automates browsers, but how do you control Selenium from a custom script so that it automates a browser for web scraping? There are Selenium libraries (bindings) for PHP and other languages that let scripts call and use Selenium. It is possible to write Selenium clients (using these libraries) in almost any language we prefer, for example Perl, Python, Java or PHP. Those libraries (the API), along with the Java-written server that invokes browsers for actions, constitute Selenium RC (Remote Control). The Remote Control automatically loads the Selenium Core into the browser to control it. For more details on the Selenium components, refer here.


A tough scrape task for a programmer

“…cURL is good, but it is very basic.  I need to handle everything manually; I am creating HTTP requests by hand.
This gets difficult – I need to do a lot of work to make sure that the requests that I send are exactly the same as the requests that a browser would
send, both for my sake and for the website’s sake. (For my sake
because I want to get the right data, and for the website’s sake
because I don’t want to cause error messages or other problems on their site because I sent a bad request that messed with their web application).  And if there is any important javascript, I need to imitate it with PHP.
It would be a great benefit to me to be able to control a browser like Firefox with my code. It would solve all my problems regarding the emulation of a real browser…
it seems that Selenium will allow me to do this…” -Ryan S

Yes, that’s what we will consider below.
Scrape with Selenium

In order to create scripts that interact with the Selenium Server (Selenium RC, Selenium Remote WebDriver) or to create a local Selenium WebDriver script, you need to make use of language-specific client drivers (also called Formatters; they are included in the selenium-ide-1.10.0.xpi package). The Selenium servers, drivers and bindings are available on the Selenium download page.
The basic recipe for scraping with Selenium:

    Use Chrome or Firefox browsers
    Get Firebug or Chrome Dev Tools (Ctrl+Shift+I) in action.
    Install the requirements (Remote Control or WebDriver, libraries and so on)
    Selenium IDE: record a 'test' run through a site, adding some assertions.
    Export it as a Python (or other language) script.
    Edit it (loops, data extraction, db input/output)
    Run the script against the Remote Control (see the sketch below)

The short intro slides for scraping tough websites with Python & Selenium are here (as Google Docs slides) and here (SlideShare).
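To make the recipe concrete, here is a minimal WebDriver-style Python sketch (an illustration only: the URL and the CSS selector are invented for the example, and it assumes Firefox plus the Python Selenium bindings are installed):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("http://www.example.com/products")    # hypothetical start URL
        driver.implicitly_wait(10)                       # let dynamic content load
        # extract the text of up to 20 elements matched by a CSS selector
        for item in driver.find_elements(By.CSS_SELECTOR, "div.product")[:20]:
            print(item.text)
    finally:
        driver.quit()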
Selenium components for Firefox installation guide

For how to install the Selenium IDE in Firefox, see here, starting at slide 21. The Selenium Core and Remote Control installation instructions are there too.
Extracting dynamic content using jQuery/JavaScript with Selenium

One programmer is doing a similar thing …

1. launch a selenium RC (remote control) server
2. load a page
3. inject the jQuery script
4. select the interested contents using jQuery/JavaScript
5. send back to the PHP client using JSON.

He particularly finds it quite easy and convenient to use jQuery for screen scraping, rather than using PHP/XPath.
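A rough Python/WebDriver sketch of the same idea (an illustration, not that programmer's code, which used Selenium RC with a PHP client; the page URL, the jQuery CDN address and the 'div.price' selector are placeholders):

    import json
    import time
    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get("http://www.example.com/listing")       # hypothetical page
        # inject jQuery into the loaded page
        driver.execute_script(
            "var s = document.createElement('script');"
            "s.src = 'https://code.jquery.com/jquery-3.7.1.min.js';"
            "document.head.appendChild(s);")
        time.sleep(2)                                       # crude wait for jQuery to load
        # select the interesting content with jQuery and hand it back as JSON
        payload = driver.execute_script(
            "return JSON.stringify(jQuery('div.price')"
            ".map(function () { return jQuery(this).text(); }).get());")
        print(json.loads(payload))
    finally:
        driver.quit()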
Conclusion

The Selenium IDE is a popular tool for browser automation, mostly for its software-testing application, yet web scraping techniques for tough dynamic websites may also be implemented with the IDE along with the Selenium Remote Control server. These are the basic steps:

    Record the 'test' browser behavior in the IDE and export it as a script in a custom programming language
    Run the exported script on the Remote Control server, which drives the browser to send HTTP requests; the script then catches the Ajax-powered responses to extract the content.

Selenium-based web scraping is an easy task for small-scale projects, but it consumes a lot of memory, since it launches a new browser instance for each request.



Source: http://extract-web-data.com/selenium-ide-and-web-scraping/

Business Intelligence Data Mining

Data mining can be technically defined as the automated extraction of hidden information from large databases for predictive analysis. In other words, it is the retrieval of useful information from large masses of data, which is also presented in an analyzed form for specific decision-making.

Data mining requires the use of mathematical algorithms and statistical techniques integrated with software tools. The final product is an easy-to-use software package that can be used even by non-mathematicians to effectively analyze the data they have. Data Mining is used in several applications like market research, consumer behavior, direct marketing, bioinformatics, genetics, text analysis, fraud detection, web site personalization, e-commerce, healthcare, customer relationship management, financial services and telecommunications.

Business intelligence data mining is used in market research, industry research, and for competitor analysis. It has applications in major industries like direct marketing, e-commerce, customer relationship management, healthcare, the oil and gas industry, scientific tests, genetics, telecommunications, financial services and utilities. BI uses various technologies like data mining, scorecarding, data warehouses, text mining, decision support systems, executive information systems, management information systems and geographic information systems for analyzing useful information for business decision making.

Business intelligence is a broader arena of decision-making that uses data mining as one of its tools. In fact, the use of data mining in BI makes the data more relevant in application. There are several kinds of data mining (text mining, web mining, social network data mining, relational database mining, pictorial data mining, audio data mining and video data mining), all of which are used in business intelligence applications.

Some data mining tools used in BI are: decision trees, information gain, probability, probability density functions, Gaussians, maximum likelihood estimation, Gaussian Bayes classification, cross-validation, neural networks, instance-based learning (case-based/memory-based/non-parametric), regression algorithms, Bayesian networks, Gaussian mixture models, K-means and hierarchical clustering, Markov models and so on.




Source: http://ezinearticles.com/?Business-Intelligence-Data-Mining&id=196648

Monday 23 September 2013

Beneficial Data Collection Services

The Internet is becoming the biggest source for information gathering. A variety of search engines are available on the World Wide Web that help in searching for any kind of information easily and quickly. Every business needs relevant data for its decision making, for which market research plays a crucial role. One of the services booming very fast is data collection. This data mining service helps in gathering the relevant data that is badly needed for your business or personal use.

Traditionally, data collection has been done manually, which is not very feasible when bulk data is required. People still manually copy and paste data from Web pages or download a complete Web site, which is a sheer waste of time and effort. A more reliable and convenient method is automated data collection: web scraping techniques crawl through thousands of web pages for the specified topic and simultaneously incorporate this information into a database, XML file, CSV file, or other custom format for future reference. Some of the most common uses of web data extraction are gathering competitors' pricing and featured data, spidering a government portal to extract the names of citizens for an investigation, and harvesting websites that have a variety of downloadable images.

There is also a more sophisticated form of automated data collection service, in which the web site information is scraped automatically on a daily basis. This method greatly helps in discovering the latest market trends, customer behavior and future trends. A few major examples of automated data collection solutions are price monitoring, collecting data from various financial institutions on a daily basis, and verifying different reports on a constant basis so they can be used to make better and more progressive business decisions.

While using these services, make sure you use the right procedure. For example, when you are retrieving data, download it into a spreadsheet so that analysts can do the comparison and analysis properly. This will also help in getting accurate results in a faster and more refined manner.



Source: http://ezinearticles.com/?Beneficial-Data-Collection-Services&id=5879822

Friday 20 September 2013

How Web Data Extraction Services Will Save Your Time and Money by Automatic Data Collection

Data scraping is the process of extracting data from the web by using a software program against a proven website. The extracted data can be used for any purpose in various industries, since the web holds every kind of important data in the world. We provide the best web data extraction software, and we have the expertise and one-of-a-kind knowledge in web data extraction, image scraping, screen scraping, email extraction services, data mining and web grabbing.

Who can use Data Scraping Services?

Data scraping and extraction services can be used by any organization, company, or firm that would like data from a particular industry, data on targeted customers, a particular company, or anything else available on the net, such as email ids, website names, or search terms. Most of the time, a marketing company will use data scraping and data extraction services to market a particular product in a certain industry and to reach the targeted customers. For example, if company X would like to contact restaurants in a California city, our software can extract the data for the restaurants of that city, and a marketing company can use this data to market their restaurant-related product. MLM and network marketing companies also use data extraction and data scraping services to find new customers by extracting the data of prospective customers, whom they can then contact by telephone, postcard, or email marketing; in this way they build their huge networks and large groups for their own products and companies.

We have helped many companies find the particular data they need. For example:

Web Data Extraction

Web pages are built using text-based mark-up languages (HTML and XHTML), and frequently contain a wealth of useful data in text form. However, most web pages are designed for human end-users and not for ease of automated use. Because of this, toolkits that scrape web content were created. A web scraper is an API for extracting data from a web site. We help you create a kind of API that lets you scrape data as per your need. We provide quality and affordable web data extraction applications.

Data Collection

Normally, data transfer between programs is accomplished using info structures suited for automated processing by computers, not people. Such interchange formats and protocols are typically rigidly structured, well-documented, easily parsed, and keep ambiguity to a minimum. Very often, these transmissions are not human-readable at all. That's why the key element that distinguishes data scraping from regular parsing is that the output being scraped was intended for display to an end-user.

Email Extractor

A tool that automatically extracts email ids from reliable sources is called an email extractor. It basically serves the function of collecting business contacts from various web pages, HTML files, text files or other formats without duplicate email ids.

Screen scrapping

Screen scraping refers to the practice of reading text information from a computer display terminal's screen and collecting visual data from a source, instead of parsing data as in web scraping.

Data Mining Services

Data mining is the process of extracting patterns from information, and it is becoming an increasingly important tool for transforming data into information. Results can be delivered in any format, including MS Excel, CSV, HTML and many others, according to your requirements.

Web spider

A Web spider is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. Many sites, in particular search engines, use spidering as a means of providing up-to-date data.

Web Grabber

Web grabber is just another name for data scraping or data extraction.

Web Bot

Web Bot is a software program that is claimed to be able to predict future events by tracking keywords entered on the Internet. Web bot software is a good program for pulling out articles, blogs, relevant website content and other such website-related data. We have worked with many clients on data extraction, data scraping and data mining, and they are really happy with our services; we provide high-quality services and make your data work very easy and automatic.




Source: http://ezinearticles.com/?How-Web-Data-Extraction-Services-Will-Save-Your-Time-and-Money-by-Automatic-Data-Collection&id=5159023

Thursday 19 September 2013

Basics of Web Data Mining and Challenges in Web Data Mining Process

Today the World Wide Web is flooded with billions of static and dynamic web pages created with languages such as HTML, PHP and ASP. The web is a great source of information, offering a lush playground for data mining. Since the data stored on the web comes in various formats and is dynamic in nature, it is a significant challenge to search, process and present the unstructured information available on the web.

The complexity of a web page far exceeds that of any conventional text document. Web pages on the internet lack uniformity and standardization, while traditional books and text documents are much simpler in their consistency. Further, search engines, with their limited capacity, cannot index all web pages, which makes data mining extremely inefficient.

Moreover, the Internet is a highly dynamic knowledge resource and grows at a rapid pace. Sports, news, finance and corporate sites update their content on an hourly or daily basis. Today the web reaches millions of users with different profiles, interests and usage purposes. Every one of them requires good information but doesn't know how to retrieve relevant data efficiently and with the least effort.

It is important to note that only a small section of the web possesses really useful information. There are three usual methods that a user adopts when accessing information stored on the internet:

• Random surfing i.e. following large numbers of hyperlinks available on the web page.
• Query based search on Search Engines - use Google or Yahoo to find relevant documents (entering specific keywords queries of interest in search box)
• Deep query searches i.e. fetching searchable database from eBay.com's product search engines or Business.com's service directory, etc.

To make the web an effective resource for knowledge discovery, researchers have developed efficient data mining techniques to extract relevant data easily, smoothly and cost-effectively.

Should you have any queries regarding Web Data mining processes, please feel free to contact us at info@outsourcingwebresearch.com





Source: http://ezinearticles.com/?Basics-of-Web-Data-Mining-and-Challenges-in-Web-Data-Mining-Process&id=4937441

Tuesday 17 September 2013

Backtesting & Data Mining

Introduction

In this article we'll take a look at two related practices that are widely used by traders called Backtesting and Data Mining. These are techniques that are powerful and valuable if we use them correctly, however traders often misuse them. Therefore, we'll also explore two common pitfalls of these techniques, known as the multiple hypothesis problem and overfitting and how to overcome these pitfalls.

Backtesting

Backtesting is just the process of using historical data to test the performance of some trading strategy. Backtesting generally starts with a strategy that we would like to test, for instance buying GBP/USD when it crosses above the 20-day moving average and selling when it crosses below that average. Now we could test that strategy by watching what the market does going forward, but that would take a long time. This is why we use historical data that is already available.
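For instance, a bare-bones backtest of that moving-average rule might look like the following Python sketch (a toy illustration: closes is assumed to be a list of daily GBP/USD closing prices, and spreads, position sizing and so on are ignored):

    def backtest_ma_cross(closes, window=20):
        """Buy when the close crosses above its moving average, sell when it crosses below."""
        position = None        # entry price while long, None while flat
        pnl = 0.0
        for i in range(window, len(closes)):
            ma = sum(closes[i - window:i]) / window
            if position is None and closes[i] > ma:
                position = closes[i]                # cross above: buy
            elif position is not None and closes[i] < ma:
                pnl += closes[i] - position         # cross below: sell
                position = None
        return pnl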

"But wait, wait!" I hear you say. "Couldn't you cheat or at least be biased because you already know what happened in the past?" That's definitely a concern, so a valid backtest will be one in which we aren't familiar with the historical data. We can accomplish this by choosing random time periods or by choosing many different time periods in which to conduct the test.

Now I can hear another group of you saying, "But all that historical data just sitting there waiting to be analyzed is tempting isn't it? Maybe there are profound secrets in that data just waiting for geeks like us to discover it. Would it be so wrong for us to examine that historical data first, to analyze it and see if we can find patterns hidden within it?" This argument is also valid, but it leads us into an area fraught with danger...the world of Data Mining

Data Mining

Data Mining involves searching through data in order to locate patterns and find possible correlations between variables. In the example above involving the 20-day moving average strategy, we just came up with that particular indicator out of the blue, but suppose we had no idea what type of strategy we wanted to test? That's when data mining comes in handy. We could search through our historical data on GBP/USD to see how the price behaved after it crossed many different moving averages. We could check price movements against many other types of indicators as well and see which ones correspond to large price movements.

The subject of data mining can be controversial because as I discussed above it seems a bit like cheating or "looking ahead" in the data. Is data mining a valid scientific technique? On the one hand the scientific method says that we're supposed to make a hypothesis first and then test it against our data, but on the other hand it seems appropriate to do some "exploration" of the data first in order to suggest a hypothesis. So which is right? We can look at the steps in the Scientific Method for a clue to the source of the confusion. The process in general looks like this:

Observation (data) >>> Hypothesis >>> Prediction >>> Experiment (data)

Notice that we can deal with data during both the Observation and Experiment stages. So both views are right. We must use data in order to create a sensible hypothesis, but we also test that hypothesis using data. The trick is simply to make sure that the two sets of data are not the same! We must never test our hypothesis using the same set of data that we used to suggest our hypothesis. In other words, if you use data mining in order to come up with strategy ideas, make sure you use a different set of data to backtest those ideas.

Now we'll turn our attention to the main pitfalls of using data mining and backtesting incorrectly. The general problem is known as "over-optimization" and I prefer to break that problem down into two distinct types. These are the multiple hypothesis problem and overfitting. In a sense they are opposite ways of making the same error. The multiple hypothesis problem involves choosing many simple hypotheses while overfitting involves the creation of one very complex hypothesis.

The Multiple Hypothesis Problem

To see how this problem arises, let's go back to our example where we backtested the 20-day moving average strategy. Let's suppose that we backtest the strategy against ten years of historical market data and lo and behold guess what? The results are not very encouraging. However, being rough and tumble traders as we are, we decide not to give up so easily. What about a ten day moving average? That might work out a little better, so let's backtest it! We run another backtest and we find that the results still aren't stellar, but they're a bit better than the 20-day results. We decide to explore a little and run similar tests with 5-day and 30-day moving averages. Finally it occurs to us that we could actually just test every single moving average up to some point and see how they all perform. So we test the 2-day, 3-day, 4-day, and so on, all the way up to the 50-day moving average.

Now certainly some of these averages will perform poorly and others will perform fairly well, but there will have to be one of them which is the absolute best. For instance we may find that the 32-day moving average turned out to be the best performer during this particular ten year period. Does this mean that there is something special about the 32-day average and that we should be confident that it will perform well in the future? Unfortunately many traders assume this to be the case, and they just stop their analysis at this point, thinking that they've discovered something profound. They have fallen into the "Multiple Hypothesis Problem" pitfall.

The problem is that there is nothing at all unusual or significant about the fact that some average turned out to be the best. After all, we tested almost fifty of them against the same data, so we'd expect to find a few good performers, just by chance. It doesn't mean there's anything special about the particular moving average that "won" in this case. The problem arises because we tested multiple hypotheses until we found one that worked, instead of choosing a single hypothesis and testing it.

Here's a good classic analogy. We could come up with a single hypothesis such as "Scott is great at flipping heads on a coin." From that, we could create a prediction that says, "If the hypothesis is true, Scott will be able to flip 10 heads in a row." Then we can perform a simple experiment to test that hypothesis. If I can flip 10 heads in a row it actually doesn't prove the hypothesis. However if I can't accomplish this feat it definitely disproves the hypothesis. As we do repeated experiments which fail to disprove the hypothesis, then our confidence in its truth grows.

That's the right way to do it. However, what if we had come up with 1,000 hypotheses instead of just the one about me being a good coin flipper? We could make the same hypothesis about 1,000 different people...me, Ed, Cindy, Bill, Sam, etc. Ok, now let's test our multiple hypotheses. We ask all 1000 people to flip a coin. There will probably be about 500 who flip heads. Everyone else can go home. Now we ask those 500 people to flip again, and this time about 250 will flip heads. On the third flip about 125 people flip heads, on the fourth about 63 people are left, and on the fifth flip there are about 32. These 32 people are all pretty amazing aren't they? They've all flipped five heads in a row! If we flip five more times and eliminate half the people each time on average, we will end up with 16, then 8, then 4, then 2 and finally one person left who has flipped ten heads in a row. It's Bill! Bill is a "fantabulous" flipper of coins! Or is he?

Well we really don't know, and that's the point. Bill may have won our contest out of pure chance, or he may very well be the best flipper of heads this side of the Andromeda galaxy. By the same token, we don't know if the 32-day moving average from our example above just performed well in our test by pure chance, or if there is really something special about it. But all we've done so far is to find a hypothesis, namely that the 32-day moving average strategy is profitable (or that Bill is a great coin flipper). We haven't actually tested that hypothesis yet.
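You can see the "winner by chance" effect in a few lines of code: with 1,000 flippers we expect roughly 1000 / 2^10, i.e. about one person, to flip ten heads in a row through pure luck.

    import random

    def flips_ten_heads():
        return all(random.random() < 0.5 for _ in range(10))

    winners = sum(flips_ten_heads() for _ in range(1000))
    print(winners)   # typically 0, 1 or 2, so a "Bill" shows up by chance alone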

So now that we understand that we haven't really discovered anything significant yet about the 32-day moving average or about Bill's ability to flip coins, the natural question to ask is what should we do next? As I mentioned above, many traders never realize that there is a next step required at all. Well, in the case of Bill you'd probably ask, "Aha, but can he flip ten heads in a row again?" In the case of the 32-day moving average, we'd want to test it again, but certainly not against the same data sample that we used to choose that hypothesis. We would choose another ten-year period and see if the strategy worked just as well. We could continue to do this experiment as many times as we wanted until our supply of new ten-year periods ran out. We refer to this as "out of sample testing", and it's the way to avoid this pitfall. There are various methods of such testing, one of which is "cross validation", but we won't get into that much detail here.

Overfitting

Overfitting is really a kind of reversal of the above problem. In the multiple hypothesis example above, we looked at many simple hypotheses and picked the one that performed best in the past. In overfitting we first look at the past and then construct a single complex hypothesis that fits well with what happened. For example if I look at the USD/JPY rate over the past 10 days, I might see that the daily closes did this:

up, up, down, up, up, up, down, down, down, up.

Got it? See the pattern? Yeah, neither do I actually. But if I wanted to use this data to suggest a hypothesis, I might come up with...

My amazing hypothesis:

If the closing price goes up twice in a row then down for one day, or if it goes down for three days in a row we should buy,

but if the closing price goes up three days in a row we should sell,

but if it goes up three days in a row and then down three days in a row we should buy.

Huh? Sounds like a whacky hypothesis right? But if we had used this strategy over the past 10 days, we would have been right on every single trade we made! The "overfitter" uses backtesting and data mining differently than the "multiple hypothesis makers" do. The "overfitter" doesn't come up with 400 different strategies to backtest. No way! The "overfitter" uses data mining tools to figure out just one strategy, no matter how complex, that would have had the best performance over the backtesting period. Will it work in the future?

Not likely, but we could always keep tweaking the model and testing the strategy in different samples (out of sample testing again) to see if our performance improves. When we stop getting performance improvements and the only thing that's rising is the complexity of our model, then we know we've crossed the line into overfitting.

Conclusion

So in summary, we've seen that data mining is a way to use our historical price data to suggest a workable trading strategy, but that we have to be aware of the pitfalls of the multiple hypothesis problem and overfitting. The way to make sure that we don't fall prey to these pitfalls is to backtest our strategy using a different dataset than the one we used during our data mining exploration. We commonly refer to this as "out of sample testing".

Scott Percival

October 2006

Scott Percival is the Director of Research for the FOREX Statistical Research Center at Market-geeks.com, a site which is devoted to using mathematics and the scientific method to study the behavior of prices in the FOREX market. Mr. Percival has a degree in Civil Engineering from Northeastern University, and has worked as a Registered Representative and trading instructor at Fidelity Investments. He is currently working toward the goal of becoming a full time FOREX trader.




Source: http://ezinearticles.com/?Backtesting-and-Data-Mining&id=341468

Monday 16 September 2013

What You Need to Know About Popular Software - Data Mining Software

Simply put, data mining is the process of extracting hidden patterns from the organization's database. Over the years it has become a more and more important tool for adding value to the company's databases. Applications include business, medicine, science and engineering, and combating terrorism. This technique actually involves two very different processes, knowledge discovery and prediction. Knowledge discovery provides users with explicit information that in a sense is sitting in the database but has not been exposed. Prediction is an attempt to read into the future.

Data mining relies on the use of real-world data. To understand how this technology works we need first to review some basic concepts. Data are any facts whether numeric or textual that can be processed by a computer. The categories include operational, non-operational, and metadata. Operational or transactional elements include accounting, cost, inventory, and sales facts and figures. Non-operational elements include forecasts and information describing competitors and the industry as a whole. Metadata describes the data itself; it is required to set up and run the databases.

Data mining commonly performs four interrelated tasks: association rule learning, classification, clustering, and regression. Let's examine each in turn. Association rule learning, also known as market basket analysis, searches for relationships between variables. A classic example is a supermarket determining which products customers buy together. Customers who buy onions and potatoes often buy beef. Classification arranges data into predefined groups. This technology can do so in a sophisticated manner. In a related technique known as clustering the groups are not predefined. Regression involves data modeling.

It has been alleged that data mining has been used both in the United States and elsewhere to combat terrorism. As always in such cases, those who know don't say, and those who say don't know. One may surmise that these anti-terrorist applications look for unusual patterns. Many credit card holders have been contacted when their spending patterns changed substantially.

Data mining has become an important feature in many customer relationship management applications. For example, this technology enables companies to focus their marketing efforts on likely customers rather than trying to sell to everyone out there. Human resources applications help companies recruit and manage employees. We have already mentioned market basket analysis. Strategic enterprise management applications help a company transform corporate targets and goals into operational decisions such as hiring and factory scheduling.

Given its great power, many people are concerned with the human rights and privacy issues around data mining. Sophisticated applications could work their way around privacy safeguards. As the technology becomes more widespread and less expensive, these issues may become more urgent. As data is summarized, the wrong conclusions can be drawn. This problem not only affects human rights but also the company's bottom line.




Source: http://ezinearticles.com/?What-You-Need-to-Know-About-Popular-Software---Data-Mining-Software&id=1920655

Saturday 14 September 2013

Data Mining As a Process

The data mining process is also known as knowledge discovery. It can be defined as the process of analyzing data from different perspectives and then summarizing it into useful information in order to improve revenue and cut costs. The process enables the categorization of data and identifies a summary of the relationships within it. In technical terms, the process can be defined as finding correlations or patterns in large relational databases. In this article, we look at how data mining works, its innovations, the needed technological infrastructure and tools such as phone validation.

Data mining is a relatively new term used in the data collection field. The process itself is very old but has evolved over time. Companies have been able to use computers to sift through large amounts of data for many years, and the process has been used widely by marketing firms in conducting market research. Through analysis, it is possible to determine how regularly customers shop and how items are bought, and to collect the information needed to establish a platform for increasing revenue. Nowadays, what aids the process is affordable and easy disk storage, computer processing power and the applications that have been developed.

Data extraction is commonly used by companies seeking to maintain a stronger customer focus, whether they are engaged in retail, marketing, finance or communication. Through this process, it is possible to determine the relationships between varying factors such as staffing, product positioning, pricing, social demographics, and market competition.

A data-mining program can be used for this. It is important to note that data mining applications vary in type; some types include machine learning, statistical methods, and neural networks. The program looks for any of the following four types of relationships: clusters (the data is grouped according to consumer preferences or logical relationships), classes (the data is located within predetermined groups), sequential patterns (the data is used to anticipate behavioral patterns and trends), and associations (the data is used to identify associations between items).

In knowledge discovery, there are different levels of data analysis and they include genetic algorithms, artificial neural networks, nearest neighbor method, data visualization, decision trees, and rule induction. The level of analysis used depends on the data that is visualized and the output needed.

Nowadays, data extraction programs are readily available in different sizes, from PC platforms to mainframe and client/server systems. In enterprise-wide use, database sizes range from 10 GB to more than 11 TB. It is important to note that two crucial technological drivers are involved: query complexity and database size. When more data needs to be processed and maintained, a more powerful system is required that can handle larger and more complex queries.

With the emergence of professional data mining companies, the costs associated with processes such as web data extraction, web scraping, web crawling and web data mining have become much more affordable.




Source: http://ezinearticles.com/?Data-Mining-As-a-Process&id=7181033

Friday 13 September 2013

An Easy Way For Data Extraction

There are many data scraping tools available on the internet. With these tools you can download large amounts of data without any stress. Over the past decade, the internet revolution has turned the entire world into an information center, and you can obtain any type of information from the internet. However, if you want particular information on one task, you may need to search many websites, and if you want to collect all the information from those websites, you have to copy it and paste it into your documents, which is hectic work for anyone. With scraping tools, you can save time and money and reduce manual work.

A web data extraction tool will extract the data from the HTML pages of different websites and compare the data. Every day, many new websites are hosted on the internet, and it is not possible to visit them all in a single day. With these data mining tools, you are able to cover far more web pages. If you are using a wide range of applications, these scraping tools will be very useful to you.

Data extraction software is used to compare structured data on the internet. Many search engines will help you find a website on a particular issue, but the data on different sites appears in different styles. A scraping expert will help you compare the data from different sites and structure it for your records.

Web crawler software is used to index web pages on the internet; it moves the data from the internet to your hard disk, after which you can browse it much faster. One important use of this tool is downloading data during off-peak hours: a download that would normally take a lot of time can be done at a fast rate. There is another tool for business people called an email extractor. With this tool, you can easily target customers' email addresses and send advertisements for your product to them at any time. It is a good tool for building a database of customers.

There are several more scraping tools available on the internet, and some well-regarded websites provide information about them. You can download these tools by paying a nominal amount.

Maulik Jyotishi is the owner of the Scrapping Expert website, a company which specializes in web spiders and data collection from web pages.




Source: http://ezinearticles.com/?An-Easy-Way-For-Data-Extraction&id=3517104

Thursday 12 September 2013

RFM - A Precursor to Data Mining

RFM in Action

RFM (Recency, Frequency, Monetary value) was initially utilized by marketers in the B-2-C space - specifically in industries like Cataloging, Insurance, Retail Banking, Telecommunications and others. There are a number of scoring approaches that can be used with RFM. We'll take a look at three:

RFM - Basic Ranking
RFM - Within Parent Cell Ranking
RFM - Weighted Cell Ranking

Each approach has experienced proponents that argue one over the other. The point is to start somewhere and experiment to find the one that works best for your company and your customer base. Let's look at a few examples.

RFM - Basic Ranking

This approach involves scoring customers based on each RFM factor separately. It begins with sorting your customers based on Recency, i.e., the number of days or months since their last purchase. Once sorted in ascending order (most recent purchasers at the top), the customers are then split into quintiles, or five equal groups. The customers in the top quintile represent the 20% of your customers that most recently purchased from you.

This process is then undertaken for Frequency and Monetary as well. Each customer is in one of the five cells for R, F, and M.

Experience tells us that the best prospects for an upcoming campaign are those customers that are in Quintile 5 for each factor - those customers that have purchased most recently, most frequently and have spent the most money. In fact, a common approach to creating an aggregated score is to concatenate the individual RFM scores together resulting in 125 cells (5x5x5).

A customer's score can range from 555 being the highest, to 111 being the lowest.
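As a rough sketch of this basic ranking (assuming a pandas DataFrame with one row per customer; the column names here are invented for the example):

    import pandas as pd

    def rfm_basic(df):
        out = df.copy()
        # rank(method="first") breaks ties so qcut can always form five equal groups
        # Recency: fewer days since the last purchase earns a higher score
        out["R"] = pd.qcut(out["days_since_purchase"].rank(method="first"), 5, labels=[5, 4, 3, 2, 1]).astype(int)
        # Frequency and Monetary: higher values earn higher scores
        out["F"] = pd.qcut(out["num_purchases"].rank(method="first"), 5, labels=[1, 2, 3, 4, 5]).astype(int)
        out["M"] = pd.qcut(out["total_spent"].rank(method="first"), 5, labels=[1, 2, 3, 4, 5]).astype(int)
        # concatenate into the familiar 125-cell score, from 111 up to 555
        out["RFM"] = out["R"].astype(str) + out["F"].astype(str) + out["M"].astype(str)
        return out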

RFM - Within Parent Cell Ranking

This approach is advocated by Arthur Middleton Hughes - one of the biggest proponents of RFM analysis. It begins like the one above, i.e., all customers are initially grouped into 5 cells based on Recency. The next step takes the customers in a given Recency cell - say cell number 5 - and then ranks those customers based on Frequency. Then the customers in the 55 (RF) cell are ranked by monetary value.

RFM - Weighted Ranking

Weightings used by RFM practitioners vary. For example, some advocate adding the R, F and M scores together, thus giving equal weight to each factor. Consequently, scores can range from 15 (5+5+5) down to 3 (1+1+1). Another weighting arrangement often used is 3xR + 2xF + 1xM. In this case, scores can range from 30 down to 6.
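Expressed as simple scoring functions (a sketch; r, f and m are assumed to be the 1-5 quintile scores from above):

    def rfm_equal_weight(r, f, m):
        return r + f + m               # ranges from 3 (1+1+1) to 15 (5+5+5)

    def rfm_weighted(r, f, m):
        return 3 * r + 2 * f + 1 * m   # ranges from 6 to 30, emphasising Recency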

So which to use? In reality, there are many other permutations of approaches that are being used today. Best-practice marketing analytics requires a fine mix of mathematical and statistical science, creativity and experimentation. Bottom line, test multiple scoring methods to see which works best for your unique customer base.

Establishing a Score Threshold

After a test or production campaign, you will find that some of the cells were profitable while some were not. Let's turn to a case study to see how you can establish a threshold that will help maximize your profitability. This study comes from Professor Charlotte Mason of the Kenan-Flagler Business School and utilizes a real-life marketing study performed by The BookBinders Book Club (Source:Recency, Frequency and Monetary (RFM) Analysis, Professor Charlotte Mason, Kenan-Flagler Business School, University of North Carolina, 2003).

BookBinders is a specialty book seller that utilizes multiple marketing channels. BookBinders traditionally did mass marketing and wanted to test the power of RFM. To do so, they initially did a random mailing to 50,000 customers. The customers were mailed an offer to purchase The Art History of Florence. Response data was captured and a "post-RFM" analysis was completed. This "post analysis" was done by freezing the files of the 50,000 test customers prior to the actual test offer. Thus, the impact of this test campaign did not affect the analysis by coding many (the actual buyers) of the 50,000 test subjects as the most recent purchasers. The results firmly support the use of RFM as a highly effective segmentation approach.

Purchased the book = yes; months since last purchase = 8.61; total # purchases = 5.22; dollars spent = 234.30
Purchased the book = no; Months since last purchase = 12.73; total # purchases = 3.76; dollars spent = 205.74

Customers that purchased the book were more recent purchasers, more frequent purchasers and had spent the most with BookBinders.

The response rate for the top decile (18%) was twice the response rate associated with the 5th decile (9%).

Results from this test were then used by BookBinders to identify which of their remaining customers should receive the same mailing. BookBinders used a breakeven response rate calculation to determine the appropriate RFM cells to mail.

The following cost information was used as input:

Cost per Mail-piece $0.50

Selling Price $18.00

BookBinders Book Cost $9.00

Shipping Costs $3.00

Breakeven is achieved when the cost of the mailing is equal to the net profit from a sale. In this case:

Breakeven = (cost to mail the offer/net profit from a single sale)

= $0.50/($18-9-3)

= ($0.50/6)

= 8.3% = Breakeven Response rate

So, according to the test offer, profit can be obtained by mailing to cells that exhibited a response rate greater than 8.3%.
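The same arithmetic as a small helper function:

    def breakeven_response_rate(mail_cost, selling_price, product_cost, shipping_cost):
        # Response rate at which the cost of the mailing exactly equals the net profit.
        return mail_cost / (selling_price - product_cost - shipping_cost)

    rate = breakeven_response_rate(0.50, 18.00, 9.00, 3.00)
    print(f"{rate:.1%}")  # 8.3%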

RFM dramatically improved profitability by capturing 71% of buyers (3,214/4,522) while mailing only 46% of their customers (22,731/50,000). And the return on marketing expenditures using RFM was more than eight times (69.7/8.5) that of a mass mailing.

Number of Cells and Cell Size Considerations

As previously mentioned, RFM was initially utilized by companies that operated in the B-to-C marketplace and generally possessed a very large number of customers. The idea of generating 125 cells using quintiles for R, F and M has been a very good practice as an initial modeling effort. But what if you are a B-to-B marketer with relatively fewer customers? Or, what if you are a B-to-C marketer with an extremely large file with millions of customers? The answer is to use the same approach that is used in data mining -- be flexible and experiment.

Establishing a minimum test cell size is a good place to start. Arthur Hughes recommends the following formula:

Test Cell Size = 4 / Breakeven Response Rate.

The Breakeven Response Rate was addressed above in the BookBinders case study. The number "4" is a number that Hughes has found works successfully based on many studies he has performed. BookBinders Breakeven Response Rate was 8.3%. Using the above formula, you would need a minimum of 48 customers in each cell (4/0.083). BookBinders actually had 400 customers per cell, so they had more than adequate comfort in the significance of their test. In reality, BookBinders could have created as many as 1,041 cells if they were comfortable using the minimum of 48 per cell. As an example, they could have used deciles as opposed to quintiles and established 1,000 cells (10 x 10 x 10). The more cells the finer the analysis, but of course the law of diminishing returns will arise.
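Hughes' rule of thumb expressed as code, applied to the BookBinders numbers:

    def min_test_cell_size(breakeven_rate, constant=4):
        # Hughes' rule of thumb: minimum number of customers per RFM cell.
        return constant / breakeven_rate

    cell_size = min_test_cell_size(0.083)
    print(round(cell_size))            # roughly 48 customers per cell
    print(50_000 // round(cell_size))  # about 1,041 possible cells of that size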

Other cell-size configurations can be used for small files. If your Breakeven Response Rate is 3%, your minimum cell size would be 133 customers (4/0.03). Therefore, if you have 12,000 customers you could have about 90 cells (12,000/133). As such, a 5 x 5 x 4 (100 cells) or a 5 x 4 x 4 (80 cells) approach may be appropriate.

Conclusions

RFM, BI and data mining are all part of an evolutionary path that is common to many marketing organizations. While RFM has been practiced for over 40 years, it still holds great value for many organizations. Its merits include:

- Simplicity - easy to understand and implement

- Relatively low cost

- Proven ROI

- The demands on data are relatively low in terms of the variables required and the number of records

- Once utilized, it sets up a broader foundation (from an infrastructure and business case perspective) to undertake more sophisticated data mining efforts

RFM's challenges include:

- Contact fatigue can be a problem for the higher scoring customers. A high level cross-campaign communication strategy can help prevent this.

- Your lowest scoring customers may never hear from you. Again, a cross-campaign communications plan should ensure that all of your customers are communicated with periodically to ensure low scoring customers are given the opportunity to meet their potential. Also, data mining and the prediction of customer lifetime value can help address this shortcoming.

- RFM includes only three variables. Data mining typically finds RFM-based variables to be quite important in response models. But there are additional variables that data mining typically use (e.g., detailed transaction, demographic and firmographic) that help produce improved results. Moreover, data mining techniques can also increase response rates via the development of richer segment/cell profiles that can be used to vary offer content and incentives.

As stated before, successful marketing efforts require analytics and experimentation. RFM has proven itself as an effective approach to predicting response and improving profitability. It can be an important stage in your company's evolution in marketing analytics.




Source: http://ezinearticles.com/?RFM---A-Precursor-to-Data-Mining&id=1962283

Wednesday 11 September 2013

Digging Up Dollars With Data Mining - An Executive's Guide

Introduction

Traditionally, organizations use data tactically - to manage operations. For a competitive edge, strong organizations use data strategically - to expand the business, to improve profitability, to reduce costs, and to market more effectively. Data mining (DM) creates information assets that an organization can leverage to achieve these strategic objectives.

In this article, we address some of the key questions executives have about data mining. These include:

    What is data mining?
    What can it do for my organization?
    How can my organization get started?

Business Definition of Data Mining

Data mining is a new component in an enterprise's decision support system (DSS) architecture. It complements and interlocks with other DSS capabilities such as query and reporting, on-line analytical processing (OLAP), data visualization, and traditional statistical analysis. These other DSS technologies are generally retrospective. They provide reports, tables, and graphs of what happened in the past. A user who knows what she's looking for can answer specific questions like: "How many new accounts were opened in the Midwest region last quarter," "Which stores had the largest change in revenues compared to the same month last year," or "Did we meet our goal of a ten-percent increase in holiday sales?"

We define data mining as "the data-driven discovery and modeling of hidden patterns in large volumes of data." Data mining differs from the retrospective technologies above because it produces models - models that capture and represent the hidden patterns in the data. With it, a user can discover patterns and build models automatically, without knowing exactly what she's looking for. The models are both descriptive and prospective. They address why things happened and what is likely to happen next. A user can pose "what-if" questions to a data-mining model that can not be queried directly from the database or warehouse. Examples include: "What is the expected lifetime value of every customer account," "Which customers are likely to open a money market account," or "Will this customer cancel our service if we introduce fees?"

The information technologies associated with DM are neural networks, genetic algorithms, fuzzy logic, and rule induction. It is outside the scope of this article to elaborate on all of these technologies. Instead, we will focus on business needs and how data mining solutions for these needs can translate into dollars.

Mapping Business Needs to Solutions and Profits

What can data mining do for your organization? In the introduction, we described several strategic opportunities for an organization to use data for advantage: business expansion, profitability, cost reduction, and sales and marketing. Let's consider these opportunities very concretely through several examples where companies successfully applied DM.

Expanding your business: Keystone Financial of Williamsport, PA, wanted to expand their customer base and attract new accounts through a LoanCheck offer. To initiate a loan, a recipient just had to go to a Keystone branch and cash the LoanCheck. Keystone introduced the $5000 LoanCheck by mailing a promotion to existing customers.

The Keystone database tracks about 300 characteristics for each customer. These characteristics include whether the person had already opened loans in the past two years, the number of active credit cards, the balance levels on those cards, and finally whether or not they responded to the $5000 LoanCheck offer. Keystone used data mining to sift through the 300 customer characteristics, find the most significant ones, and build a model of response to the LoanCheck offer. Then, they applied the model to a list of 400,000 prospects obtained from a credit bureau.

By selectively mailing to the best-rated prospects determined by the DM model, Keystone generated $1.6M in additional net income from 12,000 new customers.

Reducing costs: Empire Blue Cross/Blue Shield is New York State's largest health insurer. To compete with other healthcare companies, Empire must provide quality service and minimize costs. Attacking costs in the form of fraud and abuse is a cornerstone of Empire's strategy, and it requires considerable investigative skill as well as sophisticated information technology.

The latter includes a data mining application that profiles each physician in the Empire network based on patient claim records in their database. From the profile, the application detects subtle deviations in physician behavior relative to her/his peer group. These deviations are reported to fraud investigators as a "suspicion index." A physician who performs a high number of procedures per visit, charges 40% more per patient, or sees many patients on the weekend would be flagged immediately from the suspicion index score.
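The article does not describe Empire's scoring method in detail, but as a rough illustration of the idea, a peer-group deviation score could be sketched as follows; the file name, metrics, peer grouping and scoring rule are all illustrative assumptions.

    import pandas as pd

    # Hypothetical per-physician profile built from claim records.
    claims = pd.read_csv("physician_profiles.csv")
    metrics = ["procedures_per_visit", "charge_per_patient", "weekend_visit_rate"]

    # Z-score each metric against the physician's specialty peer group, then
    # use the mean absolute deviation as a crude "suspicion index".
    z = claims.groupby("specialty")[metrics].transform(
        lambda col: (col - col.mean()) / col.std()
    )
    claims["suspicion_index"] = z.abs().mean(axis=1)

    # The highest-scoring physicians would be routed to fraud investigators.
    flagged = claims.sort_values("suspicion_index", ascending=False).head(20)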

What has this DM effort returned to Empire? In the first three years, they realized fraud-and-abuse savings of $29M, $36M, and $39M respectively.

Improving sales effectiveness and profitability: Pharmaceutical sales representatives have a broad assortment of tools for promoting products to physicians. These tools include clinical literature, product samples, dinner meetings, teleconferences, golf outings, and more. Knowing which promotions will be most effective with which doctors is extremely valuable since wrong decisions can cost the company hundreds of dollars for the sales call and even more in lost revenue.

The reps for a large pharmaceutical company collectively make tens of thousands of sales calls. One drug maker linked six months of promotional activity with corresponding sales figures in a database, which they then used to build a predictive model for each doctor. The data-mining models revealed, for instance, that among six different promotional alternatives, only two had a significant impact on the prescribing behavior of physicians. Using all the knowledge embedded in the data-mining models, the promotional mix for each doctor was customized to maximize ROI.

Although this new program was rolled out just recently, early responses indicate that the drug maker will exceed the $1.4M sales increase originally projected. Given that this increase is generated with no new promotional spending, profits are expected to increase by a similar amount.

Looking back at this set of examples, we must ask, "Why was data mining necessary?" For Keystone, response to the loan offer did not exist in the new credit bureau database of 400,000 potential customers. The model predicted the response given the other available customer characteristics. For Empire, the suspicion index quantified the differences between physician practices and peer (model) behavior. Appropriate physician behavior was a multi-variable aggregate produced by data mining - once again, not available in the database. For the drug maker, the promotion and sales databases contained the historical record of activity. An automated data mining method was necessary to model each doctor and determine the best combination of promotions to increase future sales.

Getting Started

In each case presented above, data mining yielded significant benefits to the business. Some were top-line results that increased revenues or expanded the customer base. Others were bottom-line improvements resulting from cost-savings and enhanced productivity. The natural next question is, "How can my organization get started and begin to realize the competitive advantages of DM?"

In our experience, pilot projects are the most successful vehicles for introducing data mining. A pilot project is a short, well-planned effort to bring DM into an organization. Good pilot projects focus on one very specific business need, and they involve business users up front and throughout the project. The duration of a typical pilot project is one to three months, and it generally requires 4 to 10 people part-time.

The role of the executive in such pilot projects is two-pronged. At the outset, the executive participates in setting the strategic goals and objectives for the project. During the project and prior to roll out, the executive takes part by supervising the measurement and evaluation of results. Lack of executive sponsorship and failure to involve business users are two primary reasons DM initiatives stall or fall short.

In reading this article, perhaps you've developed a vision and want to proceed - to address a pressing business problem by sponsoring a data mining pilot project. Twisting the old adage, we say "just because you should doesn't mean you can." Be aware that a capability assessment needs to be an integral component of a DM pilot project. The assessment takes a critical look at data and data access, personnel and their skills, equipment, and software. Organizations typically underestimate the impact of data mining (and information technology in general) on their people, their processes, and their corporate culture. The pilot project provides a relatively high-reward, low-cost, and low-risk opportunity to quantify the potential impact of DM.

Another stumbling block for an organization is deciding to defer any data mining activity until a data warehouse is built. Our experience indicates that, oftentimes, DM could and should come first. The purpose of the data warehouse is to provide users the opportunity to study customer and market behavior both retrospectively and prospectively. A data mining pilot project can provide important insight into the fields and aggregates that need to be designed into the warehouse to make it really valuable. Further, the cost savings or revenue generation provided by DM can provide bootstrap funding for a data warehouse or related initiatives.

Recapping, in this article we addressed the key questions executives have about data mining - what it is, what the benefits are, and how to get started. Armed with this knowledge, begin with a pilot project. From there, you can continue building the data mining capability in your organization; to expand your business, improve profitability, reduce costs, and market your products more effectively.



Source: http://ezinearticles.com/?Digging-Up-Dollars-With-Data-Mining---An-Executives-Guide&id=6052872

Tuesday 10 September 2013

Top Data Mining Tools

Data mining is important because it means pulling out critical information from vast amounts of data. The key is to find the right tools used for the expressed purposes of examining data from any number of viewpoints and effectively summarize it into a useful data set.

Many of the tools used to organize this data have become computer based and are typically referred to as knowledge discovery tools.

Listed below are the top data mining tools in the industry:

    Insightful Miner - This tool has the best selection of ETL functions of any data mining tool on the market. This allows the merging, appending, sorting and filtering of data.
    SQL Server 2005 Data Mining Add-ins for Office 2007 - These are great add-ins for taking advantage of SQL Server 2005 predictive analytics in Office Excel 2007 and Office Visio 2007. The add-ins allow you to go through the entire development lifecycle within Excel 2007 by using either a spreadsheet or external data accessible through your SQL Server 2005 Analysis Services instance.
    RapidMiner - Also known as YALE, this is a comprehensive and arguably world-leading open-source data mining solution. It is widely used by a large number of companies and organizations. Even though it is open-source, this tool provides a secure environment out of the box, along with enterprise-capable support and services, so you will not be left out in the cold.

The list is short but ever changing in order to meet the increasing demands of companies to provide useful information from years of data.




Source: http://ezinearticles.com/?Top-Data-Mining-Tools&id=1380551

Monday 9 September 2013

Internet Data Mining - How Does it Help Businesses?

The internet has become an indispensable medium for people to conduct different types of business and transactions. This has given rise to the use of different internet data mining tools and strategies, so that businesses can better serve their purpose on the internet platform and increase their customer base many times over.

Internet data mining encompasses various processes of collecting and summarizing data from various websites or webpage contents, or making use of different login procedures, in order to identify patterns. With the help of internet data mining it becomes much easier to spot a potential competitor, improve the customer support service on a website and make it more customer oriented.

There are different types of internet data mining techniques, including content, usage and structure mining. Content mining focuses on the subject matter present on a website, which includes video, audio, images and text. Usage mining focuses on which parts of the site users access, as reported in the server access logs; this data helps in creating an effective and efficient website structure. Structure mining focuses on how websites are connected to one another, which is effective in finding similarities between various websites.

Also known as web data mining, with the aid of these tools and techniques one can predict the potential growth of a selective market for a specific product. Data gathering has never been so easy, and one can make use of a variety of tools to gather data in simpler ways. With the help of data mining tools, screen scraping, web harvesting and web crawling have become very easy, and the requisite data can be put readily into a usable style and format. Gathering data from anywhere on the web has become as simple as saying 1-2-3. Internet data mining tools are therefore effective predictors of the future trends that a business might take.

If you are interested in knowing more about Web Data Mining and other details, you are welcome to visit the Screen Scraping Technology site.



Source: http://ezinearticles.com/?Internet-Data-Mining---How-Does-it-Help-Businesses?&id=3860679

Friday 6 September 2013

Data Mining Software - Discover Software Modernization

Data mining software, in the context of software modernization, is an application used to discover knowledge about existing software artifacts, a process closely related to reverse engineering. The knowledge gained from studying existing software is usually presented in the form of models, and by querying those models one can eventually build one's own data mining software. That knowledge must be applicable, and one must also understand the mining tools that are to be used alongside the software itself. Studying computer science is one way to learn broadly about the tools, procedures and steps involved in data mining software and how to use them.

This software is mostly used in creating database schemas. Building databases is not as easy as many would think; it requires some knowledge of computer engineering and basic computing concepts. The software is also used in data crawling, because it can store data and allow it to be retrieved when needed.

The software is not cheap; it comes in different varieties, and the right choice depends on the information or database you are building.

Data mining software typically operates at different levels - the data level, design level, application level, architectural level, call graph level and program level - and the level you are covering determines which mining tools come into play.

Data mining software has spread rapidly with the introduction of computers and ERP systems. Users have been able to obtain the software at very low prices, which has made data mining quick and easy to use in shops, supermarkets and government institutions. One cannot do data crawling without basic knowledge of data mining software, because the software comprises the programs installed on the computer, and without those programs no data can be processed.

There are a number of challenges that come with using mining software. The software can easily crash or break, and because it is normally sold on CDs, the discs themselves can easily be broken or lost.

The chances of losing the data you are producing are also high, because computers crash easily when they run into difficulties, or a virus can bring the machine down.

Mining software also takes up a great deal of space on most computers, because data crawling uses graphics, and graphics occupy a lot of local disk space. You should therefore look for a computer with plenty of memory. Data crawling also needs to be updated each and every time something new appears along the way.



Source: http://ezinearticles.com/?Data-Mining-Software---Discover-Software-Modernization&id=5054991

Thursday 5 September 2013

Advantages of Data Mining in Various Businesses

Data mining techniques have advantages for several types of businesses, and more are being discovered over time. Since the era of the computer, things have been changing quickly, and every new step in the technology amounts to a revolution. Communication alone has not been enough. Compared to the present, data analysts in the past never had the chance to go further with the data they had in hand. Today, this data isn't used only for selling more of a product but also to foresee future risks and prevent them.

Everyone is benefiting from these modern techniques, from smaller businesses to large enterprises. They can now predict the outcome of a particular marketing campaign by analyzing the data behind it. However, in order for these techniques to be successful, the data must be arranged accurately. If your data is scattered, you need to bring it together and then feed it into your systems for the algorithms to work on. To put it shortly, no matter how small or big your business might be, you always need the right system for collecting data from your customers, transactions and all business activities.

Advantages of Data Mining For Businesses

Businesses can truly benefit from the latest data mining techniques, and in the future these techniques are expected to be even more concise and effective than they are today. Here are the essential applications that you need to understand:

· Big companies providing free web-based email services can use data mining techniques to catch spam emails in their customers' inboxes. Their software uses these techniques to assess whether an email is spam or not. The techniques are first tested and validated before they are put into use, to ensure they produce correct results.

· Large retail stores and even shopping malls can make use of these techniques by registering and recording the transactions made by their customers. When customers buy particular sets of products together, it gives the store a good idea of how to place those items in the aisles. If they want to change the ordering and placement of items on weekends, that can be worked out by analyzing the data in their database.

· Companies manufacturing edible or drinkable products can easily use data mining techniques to increase their sales in a particular area and launch new products based on the information they've obtained. Conventional statistical analysis is rigid in scenarios where consumer behavior is in question, but these techniques still manage to give you a good analysis in most situations.

· In call centers, human interaction is at its peak because people are talking with other people at all times. Customers respond differently when they talk to a female representative as opposed to a male representative. The response of customers to an infomercial is different from their response to an ad in the newspaper. This data can be used for the benefit of the business and is best understood with the use of data mining techniques.

· Data mining techniques are also being used in sports today for analyzing the performance of players on the field. Any game can be analyzed with the help of these techniques, and even the behavior of players on the field can be changed through them.

In short, data mining techniques give organizations, enterprises and smaller businesses the power to focus on their most productive areas. These techniques also allow stores and companies to improve their current selling techniques by unveiling the hidden trends in their customers' behavior and background, product prices, placement, closeness to related products and much more.



Source: http://ezinearticles.com/?Advantages-of-Data-Mining-in-Various-Businesses&id=7568546

Monday 2 September 2013

Web Data Extraction Services and Data Collection From Website Pages

For any business, market research and surveys play a crucial role in strategic decision making. Web scraping and data extraction techniques help you find relevant information and data for your business or personal use. Most of the time, professionals manually copy and paste data from web pages or download whole websites, which wastes time and effort.

Instead, consider using web scraping techniques that crawl through thousands of website pages to extract specific information and simultaneously save it into a database, CSV file, XML file or any other custom format for future reference (a minimal sketch follows the examples below).

Examples of web data extraction process include:
• Spider a government portal, extracting names of citizens for a survey
• Crawl competitor websites for product pricing and feature data
• Use web scraping to download images from a stock photography site for website design
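As promised above, here is a minimal sketch of this kind of extraction in Python with requests and BeautifulSoup; the URL, CSS selectors and field names are illustrative assumptions rather than anything from the article.

    import csv
    import requests
    from bs4 import BeautifulSoup

    url = "http://example.com/products"  # hypothetical page
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

    rows = []
    for item in soup.select("div.product"):  # hypothetical selector
        rows.append({
            "name": item.select_one("h2").get_text(strip=True),
            "price": item.select_one("span.price").get_text(strip=True),
        })

    # Save the extracted records to CSV for future reference.
    with open("products.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "price"])
        writer.writeheader()
        writer.writerows(rows)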

Automated Data Collection
Web scraping also allows you to monitor website data changes over a stipulated period and collect this data on a scheduled basis automatically. Automated data collection helps you discover market trends, determine user behavior and predict how data will change in the near future (a sketch follows the examples below).

Examples of automated data collection include:
• Monitor price information for select stocks on hourly basis
• Collect mortgage rates from various financial firms on daily basis
• Check weather reports on a constant basis as and when required
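A minimal sketch of the scheduled collection described above, here polling a hypothetical stock-price page every hour and appending each reading to a CSV file:

    import csv
    import time
    from datetime import datetime

    import requests
    from bs4 import BeautifulSoup

    def collect_price(url, selector):
        # Fetch the page and return the text of the first matching element.
        soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
        return soup.select_one(selector).get_text(strip=True)

    while True:
        price = collect_price("http://example.com/stock/XYZ", "span.last-price")
        with open("prices.csv", "a", newline="") as f:
            csv.writer(f).writerow([datetime.now().isoformat(), price])
        time.sleep(3600)  # in practice a cron job or scheduler would handle this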

Using web data extraction services you can mine any data related to your business objective and download it into a spreadsheet so that it can be analyzed and compared with ease.

In this way you get accurate and quicker results saving hundreds of man-hours and money!

With web data extraction services you can easily fetch product pricing information, sales leads, mailing databases, competitors' data, profile data and much more on a consistent basis.




Source: http://ezinearticles.com/?Web-Data-Extraction-Services-and-Data-Collection-Form-Website-Pages&id=4860417

Data Management Services

Recent studies have revealed that any business activity generates astonishingly huge volumes of data, so the information has to be well organized and easy to retrieve when the need arises. Timely and accurate solutions are important in facilitating efficiency in any business activity. With the emergence of professional outsourcing and data organizing companies, many services are now offered that match the various ways of managing the collected data across business activities. This article looks at some of the benefits offered by professional data mining companies.

Data entry

These services are significant because they help convert the required data into an ideal, digitized format. Some of the source data is original and handwritten, and printed paper documents or text are unlikely to be available in electronic or otherwise usable formats. The best example in this context is books that need to be converted into e-books. Insurance companies also depend on this process when processing insurance claims, and the same applies to law firms that need support to analyze and process legal documents.

EDC

EDC stands for electronic data capture. This method is mostly used by clinical researchers and related medical organizations, where electronic data capture is used to manage trials and research. Data mining and data management services are provided for the databases built for these studies, so that the information they contain can easily be captured, further services performed and surveys taken.

Data conversion

This is the process of converting data from one format to another. The process often involves extracting data from an existing system, formatting it and cleansing it so that information becomes easier to access and retrieve. Extensive testing and validation are required. The conversion services offered by data mining companies include SGML conversion, XML conversion, CAD conversion, HTML conversion and image conversion.

Managing data services

This service involves the conversion of documents, where one character set or file type may need to be converted into another. For example, image, video or audio files can be changed into formats that other software applications can play or display. These services are mostly offered alongside scanning and indexing.

Data extraction and cleansing

Extraction firms use this kind of service to pull significant information and sequences from huge databases and websites. The harvested data should be in a productive form and should be cleansed to increase its quality. Both manual and automated data cleansing services are offered by data mining organizations, which helps ensure the accuracy, completeness and integrity of the data. Keep in mind, too, that data mining alone is never enough.

Web scraping, data extraction services, web extraction, imaging, catalog conversion and web data mining are among the other management services offered by data mining organizations. If your business organization needs such services, web scraping and data mining can be of great significance.



Source: http://ezinearticles.com/?Data-Management-Services&id=7131758