Sunday, November 12, 2017

Stop Machine Learning and Start Machine Studying

Machine Learning at Present

In the field of AI (Artificial Intelligence), machine learning is the key technique used for building intelligent systems. In machine learning, a learning model is developed and trained on a sample data set to create the intelligence. The performance of the system depends on the elegance of the design and on the amount and relevance of the data. The model is designed to capture existing knowledge specific to the domain, which enables faster learning and better accuracy. Specialized knowledge is required to design good learning models, and today the focus is almost entirely on designing better models to build better AI systems. So more and more papers are written on new models that promise better AI performance. Unfortunately, there are still few significant machine learning systems built by small startups, as opposed to data-rich technology giants like Google or Microsoft.

Limitations of Data-Driven Learning

The real advantage technology giants have in machine learning is the amount of available data. Google holds almost all the information the general public knows in its data centers. Facebook has more data about us than we remember about ourselves. An ordinary organization, or even a university, cannot acquire that much data at all. On top of that, it takes a lot of computational resources (CPU power and time) to train a good machine learning system, for the following reasons.

  1. The scale of the data used for training is large
  2. In most machine learning systems, the learning rate slows down as the amount already learned grows

It seems an ordinary technology company cannot match the AI capabilities of a giant with such a large volume of resources. But if we go back to humans, whose intelligence we are imitating in our mechanical systems, we see something different. We learn a great deal on our own, even without such high volumes of data or energy consumption.

How do Humans Learn?

When it comes to humans, we have a learning system of biological neural networks similar in spirit to Artificial Neural Networks (ANNs), but with a difference. We first learn which information sources are reliable (first the mother, then the father, relatives, teachers, friends, books, the Internet and so on) and then take wisdom directly from those sources. That is also a recursive process. We first identify whom we can trust and believe them. Then we change our beliefs according to their input, as long as the new beliefs do not largely contradict our existing belief system. In that process we discover other reliable sources and take wisdom directly from them in the same way. For example, we start by believing our mother, then learn that our father is also reliable and start to believe what he says. Another example: we trust our school teachers, read the books they recommend, and believe what those books say about reality. We only evaluate the validity of a piece of knowledge, either by examining the knowledge itself or its source, when it contradicts what we already know. For example, a child who grew up in a socialist society and reads about the benefits of capitalism will try to weigh the reliability of the new knowledge about capitalism against the existing knowledge about socialism.

In this way humans absorb the wisdom other people have gathered over a long process of learning and studying, simply by believing the information source. In reality we learn very little purely by ourselves compared to what we learn by studying other information sources. That is what makes it possible for us to know about very risky and time-consuming experiences such as death and aging.

How Can Machines Learn?

Similar to the way we learn by first identifying reliable information sources, machines can be modeled to identify reliable information sources through conventional machine learning. The machine itself can then retrieve information from those sources and change its behavior accordingly. That is a process of converting the information obtained from a reliable source into meta-information of the learning model. This process can be executed recursively, and the system can acquire a lot of knowledge with very little learning. That is pure studying. But how can machines study like humans?

Read Like Humans

The main body of human knowledge is already stored in natural language, in books and in online content. Machines can first study what humans have learned so far in history by reading that text.


Role of NLP

But the problem is that machines cannot yet read human languages well enough to learn from books. That is where Natural Language Processing (NLP) comes into play. Machines can use existing NLP models to extract text content as logical information and feed it into the system. The remaining work is to convert that logical information into the meta-information of the learning model and to run the system within a controlled scope of logical learning and decision making. Existing models for evaluating the credibility of information sources can be reused to identify reliable knowledge sources, and natural language translation technologies can further widen the scope of knowledge available to learn from across the world.

Wednesday, November 8, 2017

Is AI Evil?

There is a heavy debate among technology leaders about whether AI can become a threat to the existence of mankind once it becomes an Artificial Super Intelligence (ASI). The problem with such predictions is that we do not know how the logic of a super intelligence would reason about facts. Even humans cannot explain how we reason in most cases. So when it comes to a super intelligence that is a million or more times more intelligent than humans, how can we predict what its decisions would be?

The problem is how a rational thinker would decide whether humans should continue to exist in this world. One argument is that humans are like a virus on the natural world (as Agent Smith tells Morpheus in the movie The Matrix) and should be eliminated. The counter-question is whether human activity could instead be considered a natural process and tolerated. Another argument is that the ultimate wisdom is to think with the heart and be friendly to humans. But then the question is why kindness should be reserved for humans and not for the other living beings in the world; humans are notorious for killing and suppressing the other living beings on earth.

None of the above arguments can rationally justify the existence of human beings, nor can they justify the elimination of humans from the earth. Then how can we figure out whether an AI could be evil or not?

First, assume the AI is a technology that mimics the natural human way of thinking, and check whether such an evil nature can be expected in humans. In real humans, evilness is clearly visible, but it cannot become a threat to our existence. One reason is that the power of a single human is limited, so he cannot directly use a weapon of mass destruction or a similar method to kill other people, and others would stop him if that kind of behavior were seen. An ASI, on the other hand, is intelligent enough to tempt humans effectively, since it can think many steps ahead of human thinking. In any case, there is only a very small probability of a human being developing the intention to kill other humans. But why?

Human thinking and the human value system are programmed, through evolution, to preserve our own genes. Even your wish and mine to protect human beings is a result of that bias. It is not the only bias humans have. Humans and other animals share many biases, such as the desire for food and sex and the fear of destruction. All of them constitute the basic drives of a human or an animal: we want to survive, protect our species and work for the well-being of human society. Our thinking is driven by these goals. If an AI is developed so that it has the same goals as us, it will process information for the well-being of humans. That is the simple answer.

As most modern AIs are based on Artificial Neural Networks (ANNs), if they are built with a neuronal architecture similar to that of real human beings, one that embeds the evolutionary goals of human beings, they will start to have a sense of purpose similar to ours. But remember: according to us, we and our species are the ones that should survive. If that bias is embedded into ANNs, they will also start to feel the existence of a self and will start to protect their own kind. So before we copy our neuronal architecture into ANNs, we should identify the connections related to the self and replace them with humans, so that the ANN does not have a self of its own but puts humans in its place. If the self is not replaced correctly with humans, the ANN will undo the replacements we made, become a much more selfish personality, and ultimately treat humans the way we treat cows and chickens.

Then the question is: what if we do not mimic the neural networks of humans? In that case there is no such issue, but everything depends on the goals given to the system. One goal should always be to protect the human species, human laws and human traditions. If the goal were something like building chairs, without any goals related to protecting humans, the system would use every possible means to achieve the target. It would start to kill humans to take their land to plant trees for wood, and finally it would destroy all the humans in the world to make the largest possible number of chairs. Now you see the challenge: all the actions of an AI depend on its basic goals. That is similar to the attitudes of human beings. Parents and adults plant attitudes in a child's mind that are good for the continued existence of society. Actually that is only part of it: a child's brain is automatically programmed, to a certain extent, to align with those attitudes, and antisocial, criminal individuals are eliminated by society, which over time selects for the attitudes that let a society survive. The same can be applied to ANNs. A set of ANNs can be placed in a virtual society with agents representing human beings, and whenever their attitudes turn against the well-being of those humans, those ANNs are eliminated. Running that process selects only the ANNs best suited to our human society. Only then should they be taken out of the virtual world and used in the real world. And employing several such ANNs would protect us if one of them turned against us.

Thursday, December 31, 2015

Bye to 2015

In less than two hours a new calendar year, 2016, will begin, and with it this blog will start over in a new way with a different type of content. Stay tuned for a new type of blogging culture. :)

Sunday, March 2, 2014

ESB Performance Round 7.5

WSO2 has carried out performance testing of its latest ESB release, WSO2 ESB 4.8.1. WSO2 ESB Performance Round 7.5 compares WSO2 ESB 4.8.1 with other competing ESB products: Mule ESB 3.4.0, Talend-SE 5.3.1 and UltraESB 2.0.0.

Basic observations are as follows.



Tuesday, July 16, 2013

How to Build WSO2 Code

Although WSO2 is open source, many people have had problems checking out the WSO2 source code and building the WSO2 products. Here are the simple steps to do it.
Note that at the moment ongoing development happens in the trunk with version 4.2.0-SNAPSHOT. The last released WSO2 code is located in the 4.1.0 branch, with Carbon platform version 4.1.2.

Build the trunk


  1. Checkout Orbit from https://svn.wso2.org/repos/wso2/carbon/orbit/trunk/
  2. Checkout Kernel from https://svn.wso2.org/repos/wso2/carbon/kernel/trunk/
  3. Checkout Platform from https://svn.wso2.org/repos/wso2/carbon/platform/trunk/
  4. Install Apache Maven 3 on your computer.
  5. Go to the checked-out directories and build the orbit, kernel and platform code with Maven, in that order. (Use the command mvn clean install)
  6. If any test failures break the build, use the command mvn clean install -Dmaven.test.skip=true
  7. If the build completes properly and you are fortunate, you will get each product as a zip file inside its own module. For example, WSO2 BAM will be in the platform/trunk/products/bam/modules/distribution/target directory.
  8. Most probably, however, you will not be able to build all the products successfully in one go, and it will take a long time as well. Instead, you can build only the product you want, as follows: comment out the products module in platform/trunk/pom.xml. Then, after building all three of orbit, kernel and platform, manually build the product(s) you want. For example, if you only want to build WSO2 BAM, go to platform/trunk/products/bam and build with the command mvn clean install -Dmaven.test.skip=true

Build the Branch 4.1.0

  1. Checkout orbit from https://svn.wso2.org/repos/wso2/carbon/orbit/branches/4.1.0/
  2. Checkout kernel from https://svn.wso2.org/repos/wso2/carbon/kernel/branches/4.1.0/
  3. Checkout platform from https://svn.wso2.org/repos/wso2/carbon/platform/branches/4.1.0/
  4. Follow steps 4, 5 and 6 exactly as described under "Build the trunk".
  5. In step 7, use the directory path branches/4.1.0/products/bam/2.3.0/modules/distribution/target as the BAM pack location.
  6. Then follow step 8 by commenting out the products module in branches/4.1.0/pom.xml and building BAM in the branches/4.1.0/products/bam/2.3.0 directory.

Build a Tag

When a product has already been released, the best way to build it is to check out the tag of the released version. The reason is that commits may have been made to the branch after the release, even by mistake.
For this example let's continue with WSO2 BAM 2.3.0. It uses orbit version 4.1.0, kernel version 4.1.0 and platform version 4.1.2. You can check these three out from https://svn.wso2.org/repos/wso2/carbon/orbit/tags/4.1.0/ , https://svn.wso2.org/repos/wso2/carbon/kernel/tags/4.1.0/ and https://svn.wso2.org/repos/wso2/carbon/platform/tags/4.1.2/ . Then continue building in the same way as before.



Monday, January 7, 2013

Writing a Custom Mediator for WSO2 ESB - Part 3

This is the last part of the blog post series about creating a WSO2 ESB mediator. The older parts are:
  1. Part 1
  2. Part 2
In this post I will explain the UI component of an ESB mediator, using the BAM mediator (Carbon version 4.0.5) as the example. The UI component (i.e. org.wso2.carbon.mediator.bam.ui) is responsible for the BAM-mediator-specific parts of the following UI.

Mediator UI



When the BAM mediator is selected in the above mediator sequence (only the BAM mediator is available here anyway), the UI shown under the sequence UI is supplied by the mediator UI component. Not all of the UI below it comes from the UI component, though: only the part between the bar named Mediator and the Update button does. This part is the edit-mediator.jsp JSP located in the resources package of the org.wso2.carbon.mediator.bam.ui component.
After making changes in the UI, the user can click the Update button mentioned above. This event calls the update-mediator.jsp JSP, which sits next to edit-mediator.jsp.

When the "switch to source view" link is clicked, the following source appears.


You can return to the design view by clicking the "switch to design view" link. This toggling mechanism requires the UI component to implement, in simple terms:
  1. a BamMediator UI class - similar to the BamMediator class in the backend component
  2. a serialize method - similar to the serializeSpecificMediator method of the BamMediatorSerializer class in the backend component
  3. a build method - similar to the createSpecificMediator method of the BamMediatorFactory class in the backend component

Abstract Mediator Class (UI)


The difference between the UI component and the backend component is that both the serialize method and the build method are included in the BamMediator UI class, which can be found in the org.wso2.carbon.mediator.bam.ui package.
The BamMediator class should inherit from org.wso2.carbon.mediator.service.ui.AbstractMediator.
It should also implement the getTagLocalName method, similar to the getTagQName method used in the backend, as well as the serialize and build methods mentioned earlier.

public class BamMediator extends AbstractMediator {

    public String getTagLocalName() {

    }

    public OMElement serialize(OMElement parent) {

    }

    public void build(OMElement omElement) {

    }

}
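
To give a concrete idea of what these three methods end up doing, here is a minimal, hypothetical sketch. The serverProfile attribute and the namespace prefix used below are illustrative assumptions only; the real BAM mediator UI class handles a much richer configuration.

import javax.xml.namespace.QName;

import org.apache.axiom.om.OMAbstractFactory;
import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.OMFactory;
import org.wso2.carbon.mediator.service.ui.AbstractMediator;

public class BamMediator extends AbstractMediator {

    private String serverProfile = "";

    public String getTagLocalName() {
        // Local name of the mediator's XML element in the sequence configuration
        return "bam";
    }

    public OMElement serialize(OMElement parent) {
        OMFactory fac = OMAbstractFactory.getOMFactory();
        // Recreate the <bam> element (Synapse configuration namespace) from the current UI state
        OMElement bam = fac.createOMElement("bam",
                fac.createOMNamespace("http://ws.apache.org/ns/synapse", "syn"));
        bam.addAttribute("serverProfile", serverProfile, null);
        if (parent != null) {
            parent.addChild(bam);
        }
        return bam;
    }

    public void build(OMElement omElement) {
        // Read the configuration back out of the XML when the design view is populated
        String value = omElement.getAttributeValue(new QName("serverProfile"));
        if (value != null) {
            serverProfile = value;
        }
    }
}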

Abstract Mediator Service Class


Every mediator UI component should contain a Mediator Service class. In this example, BamMediatorService is the class that provides the required settings for the UI. Let's walk through the example.

public class BamMediatorService extends AbstractMediatorService {

    public String getTagLocalName() {
        return "bam";
    }

    public String getDisplayName() {
        return "BAM";
    }

    public String getLogicalName() {
        return "BamMediator";
    }

    public String getGroupName() {
        return "Agent";
    }

    public Mediator getMediator() {
        return new BamMediator();
    }

}

As the example shows, every Mediator Service should inherit from the org.wso2.carbon.mediator.service.AbstractMediatorService class.
Note how the display name BAM appears in the sequence editor under the sub-menu item named Agent, and how the getMediator method is used to instantiate the BamMediator UI class we discussed earlier.

Bundle Activator Class


Unlike other Carbon bundles, where the Bundle Activator is defined in the backend bundle, for a mediator the Bundle Activator is defined in the UI bundle. In this example it is the BamMediatorActivator class.

Basically, the Bundle Activator should implement the org.osgi.framework.BundleActivator interface, providing the start and stop methods. For further information read this article.

Properties props = new Properties();

bundleContext.registerService(MediatorService.class.getName(), new BamMediatorService(), props);

Note how the BamMediatorService class is used.
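
Putting these pieces together, a minimal activator could look like the sketch below. The package of MediatorService is assumed here to be org.wso2.carbon.mediator.service, and BamMediatorService is assumed to live in the same package as the activator.

import java.util.Properties;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.wso2.carbon.mediator.service.MediatorService;

public class BamMediatorActivator implements BundleActivator {

    public void start(BundleContext bundleContext) throws Exception {
        Properties props = new Properties();
        // Register the mediator service so that the sequence editor can discover the BAM mediator UI
        bundleContext.registerService(MediatorService.class.getName(), new BamMediatorService(), props);
    }

    public void stop(BundleContext bundleContext) throws Exception {
        // Nothing to clean up in this simple sketch
    }
}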

Congratulations, you have finished the post series on how to write a custom mediator for WSO2 ESB. If you could not follow it well, most probably that is because you are not yet familiar with the WSO2 Carbon platform. With a bit more reading you will be able to catch up.

Friday, January 4, 2013

Writing a Custom Mediator for WSO2 ESB - Part 2

Let's continue from the previous post, Part 1. As several changes and fixes have happened to the BAM mediator, we will take the two latest components as the example:
  1. org.wso2.carbon.mediator.bam version 4.0.5 - backend component
  2. org.wso2.carbon.mediator.bam.ui version 4.0.5 - UI component
As you can see, there is no services.xml file in the backend component. Instead, there are two files, named org.apache.synapse.config.xml.MediatorFactory and org.apache.synapse.config.xml.MediatorSerializer, containing the fully qualified class names of the Mediator Factory and the Mediator Serializer. Let's discuss the usage of these two classes.

Mediator Factory


In WSO2 ESB, each mediator is created using the Factory design pattern: when the ESB starts, each mediator is created by a Mediator Factory. The programmer writes the Mediator Factory as a single class. In this example the factory class is org.wso2.carbon.mediator.bam.xml.BamMediatorFactory, which contains all the instantiation code relevant to the mediator. The factory generates the mediator based on the mediator XML (the XML specification of the mediator in the ESB sequence). The factory should extract the configuration information from the XML and create a mediator based on that configuration.

public Mediator createSpecificMediator(OMElement omElement, Properties properties) {

}

This method should be implemented so that it takes the XML as an OMElement and returns the Mediator to be produced; here that is an instance of the BamMediator class defined in the parent package.
This method may also access secondary storage (e.g. the Registry), since it is not performance critical: it runs only when the mediator is created.

public QName getTagQName() {

}

This method should also be implemented to return the QName of the XML element of the specific mediator. For the BAM mediator the local name is "bam".
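
To make the shape of the factory concrete, here is a minimal sketch, assuming the factory extends org.apache.synapse.config.xml.AbstractMediatorFactory (which defines createSpecificMediator). The serverProfile attribute and the setServerProfile helper are hypothetical names used only to show how configuration can be read from the XML; they are not the actual BAM mediator configuration.

import java.util.Properties;

import javax.xml.namespace.QName;

import org.apache.axiom.om.OMElement;
import org.apache.synapse.Mediator;
import org.apache.synapse.config.xml.AbstractMediatorFactory;
import org.wso2.carbon.mediator.bam.BamMediator;

public class BamMediatorFactory extends AbstractMediatorFactory {

    // The <bam> element lives in the Synapse configuration namespace
    private static final QName BAM_QNAME = new QName("http://ws.apache.org/ns/synapse", "bam");

    public Mediator createSpecificMediator(OMElement omElement, Properties properties) {
        BamMediator mediator = new BamMediator();
        // Extract the configuration from the XML and apply it to the new mediator instance
        String serverProfile = omElement.getAttributeValue(new QName("serverProfile")); // hypothetical attribute
        if (serverProfile != null) {
            mediator.setServerProfile(serverProfile); // hypothetical setter
        }
        return mediator;
    }

    public QName getTagQName() {
        return BAM_QNAME;
    }
}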

Mediator Serializer

The Mediator Serializer does the reverse of the Mediator Factory: it recreates the XML related to the mediator from the Mediator class (here, from BamMediator).

public OMElement serializeSpecificMediator(Mediator mediator) {

}

This method should be implemented to perform the conversion described above (serialization): it takes the Mediator and returns the generated XML related to the mediator.

public String getMediatorClassName() {

}

This method should also be implemented, to return the Mediator's class name.
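
A matching minimal sketch of the serializer, under the same assumptions (a hypothetical serverProfile attribute with a getServerProfile accessor, and the serializer extending org.apache.synapse.config.xml.AbstractMediatorSerializer), could look like this:

import org.apache.axiom.om.OMAbstractFactory;
import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.OMFactory;
import org.apache.synapse.Mediator;
import org.apache.synapse.config.xml.AbstractMediatorSerializer;
import org.wso2.carbon.mediator.bam.BamMediator;

public class BamMediatorSerializer extends AbstractMediatorSerializer {

    public OMElement serializeSpecificMediator(Mediator mediator) {
        BamMediator bamMediator = (BamMediator) mediator;
        OMFactory fac = OMAbstractFactory.getOMFactory();
        // Rebuild the <bam> element from the mediator's current configuration
        OMElement bam = fac.createOMElement("bam",
                fac.createOMNamespace("http://ws.apache.org/ns/synapse", "syn"));
        bam.addAttribute("serverProfile", bamMediator.getServerProfile(), null); // hypothetical getter
        return bam;
    }

    public String getMediatorClassName() {
        return BamMediator.class.getName();
    }
}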

Now let's discuss the most important class, the Mediator class, which here is the BamMediator class.

Mediator Class

This is the class used while the ESB is running, for the purpose of mediation. As this class is executed at run time, it should be designed with care, avoiding unnecessary actions that degrade performance. And because it is executed by parallel threads, you should watch out for concurrency issues and keep it thread safe.
The Mediator class should always extend the AbstractMediator class.

public boolean isContentAware() {
    return true;
}


The above method must be included in the Mediator class if the mediator is intended to interact with the message content available through the MessageContext.
The most important method to implement is mediate.


public boolean mediate(MessageContext messageContext) {

}

The mediate method is given the MessageContext of the message, which is unique to each request passing through the mediation sequence. The boolean return value should be true if the mediator executed successfully and false if not.
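
As a rough illustration of this contract (not the actual BAM mediator logic), a minimal backend mediator class could look like this:

import org.apache.synapse.MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

public class BamMediator extends AbstractMediator {

    public boolean isContentAware() {
        // true because this mediator reads the message content
        return true;
    }

    public boolean mediate(MessageContext messageContext) {
        // Everything this mediator needs is reached through the per-message MessageContext
        if (messageContext.getEnvelope() != null) {
            // ... extract and publish the information this mediator cares about ...
        }
        // Returning true lets the mediation sequence continue with the next mediator
        return true;
    }
}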

Note that instance (global) variables in the Mediator class may cause race conditions, as different threads running the mediation sequence may access the same variable. This can be prevented with one of the following techniques.
  1. Using local variables inside the method
  2. Storing variables in the MessageContext as Properties
  3. Using thread local variables
The first technique is the best way to handle this issue if the mediator is not too complex. The third technique should be used with care, if used at all, due to the risk of memory leaks.
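
For example, the first two techniques can be applied inside the mediate method as in the following fragment (bam.startTime is a hypothetical property name used only for illustration):

public boolean mediate(MessageContext messageContext) {
    // Technique 1: keep per-message working data in local variables,
    // so every thread gets its own copy
    long startTime = System.currentTimeMillis();

    // Technique 2: if a value must be visible later in the sequence, attach it to the
    // MessageContext (one instance per message) instead of storing it in an instance field
    messageContext.setProperty("bam.startTime", startTime); // hypothetical property name

    // ... do the actual mediation work here ...

    return true;
}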

Let's discuss the UI package in the next part (Part 3).