Sunday, 12 January 2014

cFosSpeed

cFosSpeed - The Internet Accelerator 

cFosSpeed increases your throughput and reduces your ping. 
cFosSpeed is a traffic-shaping software solution for the Windows operating system. It improves Internet latency while maintaining high transfer rates. 
The program attaches itself as a device driver to the Windows network stack.
cFosSpeed's main advantages are:
  • It keeps your Internet connection fast during heavy uploads and downloads.
  • It improves your ping for online games.
  • It reduces audio/video streaming problems.

 To learn more about how to use it, click here.
 
References
http://www.cfos.de/en/cfosspeed/cfosspeed.htm
http://en.wikipedia.org/wiki/CFosSpeed

Thursday, 9 January 2014

Graph Database

What is a Graph?

A graph is simply a collection of vertices and edges.

A property graph has the following characteristics:
  • It contains nodes and relationships.
  • Nodes contain properties (key-value pairs).
  • Relationships are named and directed, and always have a start and end node.
  • Relationships can also contain properties.
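
To make the property graph model concrete, here is a minimal sketch in Python (the class and the names used are purely illustrative, not taken from any particular graph database product):

# A minimal property-graph sketch: nodes hold key-value properties, and
# relationships are named, directed, and can hold properties of their own.
class PropertyGraph:
    def __init__(self):
        self.nodes = {}          # node_id -> {key: value}
        self.relationships = []  # (start_id, name, end_id, {key: value})

    def add_node(self, node_id, **properties):
        self.nodes[node_id] = properties

    def add_relationship(self, start_id, name, end_id, **properties):
        # Every relationship has a name, a start node, and an end node.
        self.relationships.append((start_id, name, end_id, properties))

    def outgoing(self, node_id, name=None):
        return [r for r in self.relationships
                if r[0] == node_id and (name is None or r[1] == name)]

g = PropertyGraph()
g.add_node("alice", kind="Person", age=34)
g.add_node("acme", kind="Business")
g.add_relationship("alice", "WORKS_AT", "acme", since=2012)
print(g.outgoing("alice", "WORKS_AT"))
# [('alice', 'WORKS_AT', 'acme', {'since': 2012})]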

 

What is a Graph Database?

A graph database [1] is a type of NoSQL database that uses graph theory (the study of points and lines) to store, map, and query relationships, as in Figure 1. It is also called a graph-oriented database.

Figure 1: A social graph, adapted from [7]


A graph database management system is an online database management system with Create, Read, Update, and Delete (CRUD) methods that expose a graph data model.

Graph databases are generally built for use with transactional (OLTP) systems.

A graph database is essentially a collection of nodes and edges. Each node represents an entity (such as a person or business) and each edge represents a connection or relationship between two nodes.

Every node in a graph database is defined by a unique identifier, a set of outgoing and/or incoming edges, and a set of properties expressed as key/value pairs.

Each edge is defined by a unique identifier, a start node, an end node, and a set of properties.

Graph databases are well-suited for analyzing interconnections, which is why there has been a lot of interest in using graph databases to mine data from social media.

Graph databases are also useful for working with data in business disciplines that involve complex relationships and dynamic schema, such as supply chain management, identifying the source of an IP telephony issue and creating "customers who bought this also looked at..." recommendations.

Which graph databases are available?

Google has its own graph computing system called Pregel (you can find the paper here), but there are several commercial and open source graph databases available. Let's look at a few.

Neo4j [4,3]
  • This is one of the most popular databases in the category, and one of the few open-source options (a short usage sketch follows this list of databases).
  • It is Java-based but has bindings for other languages, including Ruby and Python.
  • It is the product of the company Neo Technology, which recently moved the community edition of Neo4j from the AGPL license to the GPL license.
FlockDB [3]
  • FlockDB was created and open-sourced by Twitter.
  • It is a real-time, distributed database.
  • Twitter's Kevin Weil talked about the creation of the database, along with Twitter's use of other NoSQL databases, at Strange Loop last year.
AllegroGraph [3,5]
  • It is a graph database built around the W3C spec for the Resource Description Framework.
  • It's designed for handling Linked Data and the Semantic Web.
  • It supports SPARQL, RDFS++, and Prolog.
  • It is a proprietary product of Franz Inc., which markets a number of Semantic Web products - including its flagship set of LISP-based development tools.
  • It uses efficient memory utilization in combination with disk-based storage, enabling it to scale to billions of quads while maintaining superior performance.

GraphDB [6,3]
  • It is a graph database built in .NET by the German company sones, based in Erfurt and Leipzig.
  • It is available as a cloud service through Amazon S3 or Microsoft Azure.
  • The npm package of the same name [6] is a simple node.js package designed to ease the process of working with graph databases.
  • It provides high-level graph operations (create node, link, etc.) that are generally common among graph databases.
  • It uses connector packages to implement store-specific serialization.
InfiniteGraph [2,3]
  • It is a distributed graph database implemented in Java.
  • Its goal is to create a graph database with "virtually unlimited scalability."
  • It is produced by Objectivity, a company that develops data technologies supporting large-scale, distributed data management, object persistence, and relationship analytics.
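
As a small illustration of how such a database is used in practice, here is a hedged sketch of creating and querying nodes in Neo4j with its Cypher query language from Python. It assumes the official neo4j Python driver is installed and a server is running locally; the URI, credentials, labels, and property names are placeholders, not values from this post:

# Sketch only: requires "pip install neo4j" and a running Neo4j server.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # placeholder credentials

with driver.session() as session:
    # Create two Person nodes and a directed, named relationship with a property.
    session.run(
        "MERGE (a:Person {name: $a}) "
        "MERGE (b:Person {name: $b}) "
        "MERGE (a)-[:FOLLOWS {since: 2014}]->(b)",
        a="Alice", b="Bob",
    )
    # Ask who Alice follows.
    result = session.run(
        "MATCH (:Person {name: $a})-[:FOLLOWS]->(p:Person) RETURN p.name AS name",
        a="Alice",
    )
    print([record["name"] for record in result])  # ['Bob']

driver.close()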


What are the advantages of graph databases?

  • It offers an extremely flexible data model.
  • Performance tends to remain relatively constant even as the dataset grows, because queries are localized to a portion of the graph.
  • The execution time for each query is proportional only to the size of the part of the graph traversed to satisfy that query, rather than to the size of the overall graph (see the traversal sketch after this list).
  • It expresses and accommodates business needs in a way that enables IT to move at the speed of business.
  • You can add new kinds of relationships, new nodes, and new sub-graphs to an existing structure without disturbing existing queries and application functionality.
  • The schema-free nature of the graph data model, coupled with the testable nature of a graph database’s application programming interface (API) and query language, empower us to evolve an application in a controlled manner.
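
To illustrate why query cost tracks the traversed neighbourhood rather than the total graph size, here is a small Python sketch (a plain adjacency-list dictionary, not any particular product) that performs a depth-bounded breadth-first traversal from a start node; it only ever touches nodes reachable within the given number of hops:

from collections import deque

# Toy adjacency list: node -> list of neighbouring nodes.
graph = {
    "alice": ["bob", "carol"],
    "bob":   ["dave"],
    "carol": [],
    "dave":  [],
    "erin":  ["frank"],  # never visited when starting from "alice"
    "frank": [],
}

def neighbourhood(graph, start, max_depth):
    """Breadth-first traversal; the work done depends only on the nodes reached."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return seen

print(neighbourhood(graph, "alice", 2))  # {'alice', 'bob', 'carol', 'dave'}

Adding thousands of nodes that are not connected to "alice" would not change the work this traversal performs, which is the intuition behind the locality claim above.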
However, most NoSQL databases, whether key-value, document, or column-oriented, store sets of disconnected documents/values/columns. This makes it difficult to use them for connected data and graphs. One well-known strategy for adding relationships to such stores is to embed an aggregate's identifier inside a field belonging to another aggregate, effectively introducing foreign keys. But this requires joining aggregates at the application level, which quickly becomes prohibitively expensive [7].


References
  1. http://whatis.techtarget.com/definition/
  2. http://en.wikipedia.org/wiki/InfiniteGraph
  3. http://readwrite.com/
  4. http://www.neotechnology.com/
  5. http://www.franz.com/agraph/allegrograph/
  6. https://npmjs.org/package/graphdb
  7. http://graphdatabases.com/


Friday, 3 January 2014

Save and fill out forms using the autofill feature

The HTML autocomplete attribute allows web browsers to fill out fields automatically. It is commonly used on username and password fields to allow or prevent autofill from working.

The attribute itself is Boolean: it is either on or off, and the default is on.

To turn this feature on or off in Firefox, do the following:
  1. Open Firefox.
  2. Type about:config in the address bar.
  3. Click "I'll be careful, I promise!".
  4. Search for signon.autofillForms.
  5. Double-click the preference to toggle its value to true (if it is not already true).



You can also reach the saved-password settings through the menus:
Firefox > Options > Options > Security > Passwords > "Saved Passwords" > "Show Passwords"






After these steps, passwords can be saved and filled in by the web browser directly.

Thursday, 2 January 2014

What are useful resources for a newcomer to data analysis techniques?

This article is taken from http://searchcloudcomputing.techtarget.com/answer/
It was written by Dan Sullivan.
---------------------------------------------------------------------------------------------------

Useful resources for a newcomer to data analysis techniques:

 

Many organizations are adept at collecting data, but the real value is only realized when the data is analyzed. Creating and maintaining a data analysis practice will require support from cloud administrators, as well as data analysts. Cloud administrators will be called on to configure systems, evaluate architectures and maintain infrastructure for data analysts. The more you know about the practice of data analysis, the better you can support it.

Using a combination of books and online tutorials while working with various tools can help you dive into data analysis while staying linked to your own real-world data analysis problems.

Many data analysis techniques are taken from statistics and machine learning. Coursera.org, the free resource for massive open online courses, offers courses in computing for data analysis, mathematical modeling, and statistics. Andrew Ng's course on machine learning at Coursera is well designed for students new to the topic.

Philipp Janert's book Data Analysis with Open Source Tools introduces statistical techniques along with open source tools. Wes McKinney's Python for Data Analysis: Agile Tools for Real-World Data is a good introduction to working with data in Python.

R is a widely used open source statistical analysis tool with a wide set of add-on packages. The R Tutorial is a gentle introduction to R, but it has some more advanced articles as well. The Pandas Python package has features comparable to R, and it is a good fit for Python developers who want to use Python for collecting, formatting, and analyzing data.
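
As a rough taste of the Pandas workflow mentioned above (the file and column names here are made up purely for illustration), loading a CSV file and producing summary statistics takes only a few lines:

import pandas as pd

# Hypothetical CSV of web traffic with columns "date", "region", "visits".
df = pd.read_csv("traffic.csv", parse_dates=["date"])

print(df.describe())                          # basic summary statistics
print(df.groupby("region")["visits"].sum())   # total visits per region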

Getting started with data-mining tools does not have to be intimidating. RapidMiner is an open source data-mining tool with an easy-to-use interface and a wide collection of research tools available.

Visualization tools such as Tableau Software, a visualization service, can help you better understand large data sets with many variables. This is a fee-for-service product, but there is a free trial if you want to give it a try.

Sunday, 8 December 2013

"'Fog Computing'" Does the Cloud Computing under risk?

Fog Computing Is A New Concept Of Data Distribution?

For the last couple of years there have hardly been any new concepts in cloud computing. But hardly doesn’t mean never. Since the summer of 2013 you might have noticed the appearance of a new trend that has been called Fog Computing.

The main purpose of fog computing is to gather services, applications, and large data volumes in one place and unite them with the networks of a new generation.

The aim is to offer data, computing power, memory, and services at a truly distributed level. Today's data is extremely dispersed and delivered continuously, in large volumes, to a large number of users with different devices. To make an effective cloud model, businesses need to learn how to deliver content to their users via geographically distributed platforms rather than via a cloud hosted in a single location.

The idea of fog computing is to distribute all data and place it closer to the user, removing network delays and other obstacles connected with data transfer. Users need all their data and apps at any time, in any place; this is the essence of cloud services, and fog computing may take this service to the next level.

The concept of this abstract fog is based on the concept of a drop (which is why the concept has such a name). A drop is a microcontroller chip with built-in memory and a data transfer interface, combined with a wireless mesh networking chip. Such a drop runs on a small battery that lasts a couple of years, and you can connect various temperature, light, voltage, and other sensors to it. The drop is the basic technology for fog computing: with the help of such mini-chips it is possible to create a truly distributed network of data and devices all around the planet.


What are the possible advantages of using fog computing?

Firstly, with the help of fog computing you can place data closer to the final user, even geographically. The constant circulation of data around the world forces providers to create new technologies for local storage and caching. The drops allow keeping data close to the user instead of storing it in a distant data center, which helps avoid delays in data transfer.
Fog computing also helps to provide quality service for mobile users. The administrator of your infrastructure gets access to data such as where and how your users retrieve information, and how fast the process is. This helps to improve interaction with clients and makes that interaction safer. By controlling data at all node points, fog computing lets you turn your data center into a distributed cloud platform for users.

Secondly, this concept is not something to be developed in the future; it is already here. Many companies already use fog computing, while others are ready for it. In fact, any company that delivers content can start using fog computing.
What is important is that fog computing is not a replacement for cloud computing. It is an addition that develops the concept of cloud services. With "drops", data is kept close to users rather than isolated in distant cloud systems.


The article is taken from:


Read More about Fog Computing:

Wednesday, 13 November 2013

Fastest web browser ever

CometBird is a Web browser. It is secure, speedy, and totally free. It is effective in both performance and privacy protection.

Main features
The bookmarks auto-synchronizer enables you to use the same bookmarks collection anytime and anywhere.

Thousands of customizing options are available for personalizing your own CometBird.

Its security and privacy protection techniques create a clean browsing environment for the user.

It has a password manager and a smart address bar to facilitate browsing and logging in.

Try it

Wednesday, 9 October 2013

My mouse disconnects and then reconnects randomly


If your mouse disconnects and reconnects randomly, try the following:

First: plug your mouse into a different USB port.

Second: make sure the bottom of your mouse is clean.

Third: open Control Panel > System and Security, then choose Hardware and Sound from the left-hand side. Under Devices and Printers, choose Device Manager. Under Universal Serial Bus Controllers, double-click the first USB Root Hub, open the Power Management tab, and uncheck "Allow the computer to turn off this device to save power".

Uncheck "Allow the computer to turn off this device to save power"
Fourth: update your drivers as follows:

Update the drivers