SharePoint Back-Propagation Neural Network Problem

Yeah, I know what you are thinking, but I'm not full of shit, and I know I often bring SharePoint to levels it probably shouldn't be taken to, but whatever. It's actually a side project I am working on that aims to aggregate several sets of data into a forecasting-model-type environment, since SharePoint lends itself pretty well to the data aggregation part, and partially to the data mining part; at least it kind of exposes, through the API, the objects that would otherwise be required to do it.

Ok, so for people who haven't worked with AI before, here's the highest-level introduction possible…

So there are basically two types of artificial intelligence: weak artificial intelligence and strong artificial intelligence. Weak artificial intelligence doesn't really have the capability to evolve that well, so it can be argued whether it qualifies as AI at all. It doesn't really constitute a pattern that mimics human behavior and the concept of evolved choice; it relies instead on clever programming and raw computing power to represent behavior that may be considered "human".

On the other hand, there is the concept of strong artificial intelligence, which is a lot different, since it implies that the behavior and choice patterns of humans can be logically represented. So, in essence, your patterned programming is instead representative of the human mind. I haven't really seen anything in application that has done this, but in theory this is what an expert system targeting a business application, something like SharePoint, should adhere to; weak AI, however, might be a stepping stone toward such a thing.

Regardless, if SharePoint, as a primary business application platform, were to be coupled with an AI system, it would be composed of (or could use, whatever) three main concepts:

Expert Systems

Neural Networks (or Artificial Neural Networks [ANN])

Evolutionary Algorithms

OK, so there are several parts and concepts that make it up. The problem I was running into was building a Back-Propagation Neural Network; if I can get the rudimentary concept to work, I plan on extending it to hopefully work with Dynamic Link Matching (neuronal modeling), which is my real interest. What's this? Well, I am not very adept at its concepts, but I have studied it for a wee bit, and it is basically how one could theoretically use pre-defined neural systems for the recognition of external objects, which is neato cheato.

Dynamic link matching is one of the most robust mechanisms known in the realm of physical pattern recognition (or, in a broader sense, translation-invariant object recognition), as it leaves little room for error from distortion of the inputted objects (distortion generally occurs because expressions change so much during the templating process [also known as topographic mapping] and because of depth skews). Dynamic link matching is heavily dependent on the concept of wavelets, the Gabor wavelet transform more specifically (which describes local grey-value distributions). The most notable thing about DLM is its low error rate, because it compensates well for depth and deformation within the template scan.

After the template scan has occurred, the fun stuff appears to start happening.

You can generally see something like a human face (represented by the circular object) with several little dotted nodes across it (the plane the image is mapped onto is a neural sheet of hypercolumns); each node represents a neuron which, going back to the wavelet talk, also has an associated jet value that describes the local grey-value distribution.
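Just to make the jet idea a bit more concrete, here is a rough illustrative sketch of comparing two jets. The magnitude-only similarity below is the usual normalized dot product from the Gabor-jet literature; the Jet class and its fields are names I made up for the example, not anything from an actual DLM library.

[csharp]

using System;

// Illustrative sketch only: a "jet" here is just the vector of Gabor filter
// response magnitudes sampled at one node of the template graph.
public class Jet
{
    public double[] Magnitudes;
}

public static class JetComparer
{
    // Normalized dot product of two jets' magnitudes; 1.0 means a perfect match.
    public static double Similarity(Jet a, Jet b)
    {
        double dot = 0.0, normA = 0.0, normB = 0.0;

        for (int k = 0; k < a.Magnitudes.Length; k++)
        {
            dot += a.Magnitudes[k] * b.Magnitudes[k];
            normA += a.Magnitudes[k] * a.Magnitudes[k];
            normB += b.Magnitudes[k] * b.Magnitudes[k];
        }

        return dot / Math.Sqrt(normA * normB);
    }
}

[/csharp]

Real DLM also uses the wavelet phases and the self-organizing matching dynamics I mention next, but the magnitude comparison is the easiest piece to show.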

When the actual matching of the inputted object against the stored template is performed, it leverages network self-organization. Maybe I will talk about this in a later post, because I still haven't posted my problematic code, which is starting to annoy me.

Anyhoo, I don’t remember what I was writing about now. Oh yeah, Back-Propagation. So I was working on that for a client, and my god, what a pain in the butt getting some of it to work with SharePoint was. My main problem was getting the god damn weights to update correctly. What I finally settled on was this:

[csharp]

private readonly Unit[][] neuralNet;

public double[] neuralData;

public static double PrimaryDerivationOfActivation(double Argument)
{
    // Derivative of the sigmoid activation, written in terms of the unit's cached output
    return (Argument * (1 - Argument));
}

protected void UpdateWeights(double learningCount, double influence, double decayRate)
{
    // Start at layer 1; input-layer units have no incoming links to update
    for (int i = 1; i < neuralNet.Length; i++)
    {
        for (int j = 0; j < neuralNet[i].Length; j++)
        {
            Unit unit = neuralNet[i][j];

            foreach (Link link in unit.InputLinks)
            {
                // Delta rule with momentum: learning rate * source output * error term * f'(output),
                // plus the previous weight change scaled by the momentum (influence) factor
                double lr = (((learningCount * link.Source.GetOutput()) * unit.neuralData[0])
                    * PrimaryDerivationOfActivation(unit.GetOutput()))
                    + (influence * unit.neuralData[1]);

                // Remember this change so the next pass can apply momentum to it
                unit.neuralData[1] = lr;

                // Apply the change along with a simple weight decay
                link.Weight = (link.Weight + lr) - (decayRate * link.Weight);
            }
        }
    }
}

[/csharp]

Whew, I am glad I finally got the mother to work. Anyways, I will hopefully be releasing the forecasting system if the client is hip to it, and hopefully an API that allows other developers to extend other AI applications into SharePoint in order to maybe build other applications. Or I may be the only person interested in it. Meh. :)
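For anyone trying to piece together what that snippet leans on: it assumes a tiny Unit/Link scaffold that I haven't posted. Here is a rough sketch of what mine looks like; the names (Unit, Link, InputLinks, GetOutput, neuralData) are just my own conventions, and the sigmoid activation is an assumption that lines up with the x * (1 - x) derivative above.

[csharp]

using System;
using System.Collections.Generic;

// Minimal sketch of the scaffolding the update loop leans on; an illustration
// of my own conventions, not a finished library.
public class Unit
{
    public List<Link> InputLinks = new List<Link>();

    // neuralData[0] = error (delta) term computed during back-propagation,
    // neuralData[1] = previous weight change, kept around for the momentum term
    public double[] neuralData = new double[2];

    private double output;

    // Sigmoid activation; its derivative is output * (1 - output), which is
    // exactly what PrimaryDerivationOfActivation computes from the cached output.
    public double ComputeOutput()
    {
        double net = 0.0;
        foreach (Link link in InputLinks)
            net += link.Source.GetOutput() * link.Weight;

        output = 1.0 / (1.0 + Math.Exp(-net));
        return output;
    }

    public double GetOutput()
    {
        return output;
    }
}

public class Link
{
    public Unit Source;   // the unit feeding this connection
    public double Weight; // connection weight, nudged by UpdateWeights
}

[/csharp]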


Formation and Elicitation of Knowledge Management


KM, also known as knowledge management, is, from a process perspective, concerned with the creation, dissemination, and usage of knowledge within a company. A well-structured process needs to be in place for managing knowledge successfully. That process can be separated into steps.

It starts with knowledge creation or elicitation, followed by its capture and storage, then its transfer or dissemination, and lastly its exploitation. With that, we have the various stages of the process.

Knowledge Creation and Elicitation

Knowledge is created or elicited in reaction to something, or in response to a stimulus of some sort. Thus, knowledge must be created or elicited from sources in order to serve as input to the knowledge management process.

In the first scenario, where knowledge has to be created, we start with the roots: data. Relevant data must be gathered from various sources, including sales, billing, transaction, and collection systems. Once the applicable data is gathered, it must be processed to produce meaningful information; transaction processing systems take care of that task in many businesses today. Like data, information comes from various sources and likewise needs to be gathered.

One important aspect of this step is vigilance about where the information comes from, since it can originate from external as well as internal sources. Industry publications, market surveys, government regulations and laws, and the like make up the external sources. The gathered information then needs to be integrated. When all of the necessary information is gathered and at our disposal, we start analyzing it for patterns, trends, and associations, generating knowledge as a result.

Knowledge creation tasks can be delegated to dedicated personnel, such as marketing or financial analysts. Alternatively, they can employ artificial-intelligence-based computerized techniques such as genetic algorithms, intelligent agents, and artificial neural networks.

Data mining and knowledge discovery in databases (KDD) refer to the process of extracting valid, previously unknown, and potentially useful patterns and information from raw data in large databases. The analogy behind data mining is sifting through huge amounts of low-grade ore (the data) to find something of value. The process is a multi-step, iterative, inductive one; its tasks include problem analysis, data extraction, data preparation and cleaning, data reduction, rule development, output analysis, and review. Because data mining involves retrospective analysis of data, experimental design is outside its scope. Data mining and KDD are generally treated as synonyms referring to the entire process of evolving from data to knowledge. The goal of data mining is to extract pertinent information from the data, with the ultimate goal of discovering knowledge.

Eliciting Tacit Knowledge

Knowledge also resides in the minds of employees in the form of know-how. Knowledge residing in the human mind, however, is often tacit. To be ready for sharing across an organization, that knowledge needs to be converted into an explicit form.

An inviting organizational atmosphere is central to eliciting knowledge. Sharing "know-how" with colleagues is essential, especially when it can happen without fear of losing personal value or of low job security. Knowledge management is all about sharing. Personnel are more likely to communicate freely, and with fewer reservations, in informal settings, speaking with peers rather than with managers they report to.

Capturing and Storage

To enable storage and distribution, gathered knowledge must be codified into machine-readable formats. Codification turns explicit knowledge held in paper reports or manuals into electronic documents, and turns tacit knowledge first into symbolic, and then electronic, form. The documents require search capabilities to facilitate easy retrieval of knowledge. The notion behind codification is that knowledge can in fact be codified, stored, and reused at a later time. The implication is that the knowledge is extracted from the person(s) who developed it and made independent of that person; the information can then be reused for various purposes. This approach lets individuals search for and retrieve knowledge without contacting its original developer.

Codification of knowledge, while beneficial for sharing purposes, has associated costs. It makes it easier for strategic know-how to be transferred outside the company for unscrupulous reasons. It is costly to codify knowledge and build repositories. Furthermore, we can end up with information overload, where large directories of codified knowledge are never used because the volume of gathered information is overwhelming. Codified knowledge should be gathered from a variety of sources and made centrally accessible to all members of the organization. Exploiting centralized repositories facilitates easy and speedy retrieval of knowledge while eliminating duplication of effort at the departmental or organizational level, and for this reason saves costs.

Transferring and Dissemination

One of the most prevalent barriers to organizational knowledge use is an unproductive channel between the knowledge supplier and the knowledge seeker.

Bottlenecks arise from causes such as separation in time or location, or a lack of incentives for knowledge sharing.



Ruggles (1998) conducted a study of 431 US and European companies which showed that creating networks of knowledge workers and mapping internal knowledge are the two top priorities for effective knowledge management.

Nowadays, nearly all knowledge repositories are web-enabled, providing the broadest dissemination over the World Wide Web or via intranets. Group support systems are also utilized to support knowledge sharing; two of the prominent products are IBM's Lotus Notes and Microsoft Exchange. Security of the data sources is important, as is user friendliness, and both are considered when making knowledge repositories accessible. Passwords and servers on secure platforms are important when providing access to knowledge of a sensitive nature, and access mechanisms need to be user-friendly if people are actually to use the repositories.

Exchanging explicit knowledge is comparatively straightforward via electronic means. Exchanging tacit knowledge, on the other hand, is easiest when there is a shared context, collaboration, and a common language of verbal and non-verbal cues; this enables high levels of understanding among members of the organization.

In 1995, Nonaka and Takeuchi identified the processes of socialization and externalization as methods of transferring tacit knowledge. Socialization keeps the knowledge tacit throughout the transfer, whereas externalization converts the tacit knowledge into more explicit knowledge. Examples of socialization include on-the-job training and apprenticeships. Externalization, which uses metaphors and analogies to trigger dialogue between individuals, communicates knowledge, although portions of the knowledge are lost in the transfer. To support such knowledge sharing, businesses ought to allow for video and desktop conferencing as practical channels for knowledge dissemination.

Exploitation and Application of Knowledge

Staff members actually using knowledge repositories in their day-to-day work is a key gauge of the system's success. Unless people learn from knowledge and apply it, knowledge will never turn into innovation. The enhanced ability to collect and process data, or to communicate through electronic devices, does not on its own necessarily lead to improvement in human communication or action.

The notion of communities of practice fostering knowledge sharing and exploitation has recently drawn interest around the world. Brown and Duguid (1991) argued that a significant task for organizations is to recognize and back existing or embryonic communities. A great deal of knowledge exploitation and application occurs within a team environment, including workgroups in organizations, and that support is necessary for success.

Davis and Botkin (1994) summarized six traits of knowledge-based businesses. The traits include:

  1. The more customers employ knowledge-based offerings, the more intelligent they become.
  2. Knowledge-based products and services adjust to changing circumstances.
  3. Knowledge-based businesses can customize their offerings.
  4. Knowledge-based products and services have relatively short life cycles.
  5. Knowledge-based businesses react to customers in real time.