
SharePoint Back-Propagation Neural Network Problem

Yeah, I know what you are thinking, but I’m not full of shit, and I know I often take SharePoint to levels it probably shouldn’t be taken to, but whatever. This is actually a side project I am working on that aggregates several sets of data into a forecasting-model type environment, since SharePoint lends itself pretty well to the data aggregation part, and partially well to the data mining part; at the very least it exposes, through the API, the objects you would otherwise have to build yourself.

Ok, so for people that haven’t worked with AI before, the highest level introduction possible…

So there are basically two types of artificial intelligence: weak artificial intelligence and strong artificial intelligence. Weak artificial intelligence doesn’t really have the capability to evolve, so it can be argued whether it qualifies as AI at all. It doesn’t really embody a pattern that mimics human behavior and the concept of evolved choice; it relies instead on clever programming and raw computing power to produce behavior that might be considered “human”.

On the other hand, there is the concept of strong artificial intelligence, which is a lot different, since it implies that the behavior and choice patterns of humans can be logically represented. So, in essence, your program becomes representative of the human mind itself. I haven’t really seen any real application that has done this, but in theory it is what an expert system targeting a business platform like SharePoint should aspire to, and weak AI might be a stepping stone toward it.

Regardless, if SharePoint, as a primary business application platform, were coupled with an AI system, that system would be composed of (or could at least make use of) three main concepts (there’s a rough sketch of how they might hang together right after the list):

Expert Systems

Neural Networks (or Artificial Neural Networks [ANN])

Evolutionary Algorithms
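Just to make that a bit more concrete, here is a rough sketch of how I picture those three pieces layering over data pulled out of SharePoint lists. None of these interface names map to anything real, in SharePoint or anywhere else; they are purely an assumption about how the layering could look:

// Purely hypothetical interfaces -- a sketch of the layering, not a real API.
public interface IForecastDataSource
{
    // In practice this would wrap rows pulled out of a SharePoint list via the object model.
    double[][] GetTrainingRows();
}

public interface INeuralNetwork
{
    void Train(double[][] rows);          // back-propagation lives behind this
    double[] Predict(double[] inputs);
}

public interface IExpertSystem
{
    // Hand-authored business rules layered over whatever the network predicts.
    string Interpret(double[] prediction);
}

public interface IEvolutionaryTrainer
{
    // Searches over topologies / learning parameters instead of tweaking weights directly.
    INeuralNetwork Evolve(IForecastDataSource source);
}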

OK, so there are several parts and concepts that make it up. The problem I was running into was building a Back-Propagation Neural Network; if I can get the rudimentary concept to work, I plan on extending it to work with Dynamic Link Matching (neuronal modeling), which is my real interest. What’s this? Well, I am not very adept at its concepts, but I have studied it for a wee bit, and it is basically how one could theoretically use pre-defined neural systems for the recognition of external objects, which is neato cheato.

Dynamic link matching is one of the most robust mechanisms known in the realm of physical pattern recognition (or, in a broader sense, translation-invariant object recognition), as it leaves little room for error from distortion of the inputted objects (which generally creeps in because expressions change so much during the templating process [also known as topographic mapping] and because of depth skews). Dynamic link matching is heavily dependent on the concept of wavelets, the Gabor wavelet transform more specifically (which captures local grey-value distributions). The most notable thing about DLM is its low error rate, because it compensates well for depth and deformation within the template scan.

After the template scan has occurred, the fun stuff appears to start happening.

You can generally picture something like a human face (represented by a circular object) with several little dotted nodes across it (the plane the image is mapped onto is a neural sheet of hypercolumns). Each node represents a neuron which, going back to the wavelet talk, also has an associated jet value, and the jet is what captures the local grey-value distribution.
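To give a rough feel for what a jet is (this is my own toy example, not anything out of a real DLM implementation): each node holds the magnitudes of a small bank of Gabor wavelet responses sampled at its image position, and two nodes are compared with a normalized dot product of those magnitudes.

using System;

// Hypothetical sketch of a jet and a jet-to-jet similarity measure.
public sealed class GaborJet
{
    // Magnitudes of the Gabor wavelet responses (one per frequency/orientation) at this node.
    public double[] Magnitudes { get; private set; }

    public GaborJet(double[] magnitudes)
    {
        Magnitudes = magnitudes;
    }

    // Normalized dot product: close to 1 when the local grey-value structure matches.
    public static double Similarity(GaborJet a, GaborJet b)
    {
        double dot = 0.0, normA = 0.0, normB = 0.0;
        for (int k = 0; k < a.Magnitudes.Length; k++)
        {
            dot += a.Magnitudes[k] * b.Magnitudes[k];
            normA += a.Magnitudes[k] * a.Magnitudes[k];
            normB += b.Magnitudes[k] * b.Magnitudes[k];
        }
        return dot / Math.Sqrt(normA * normB);
    }
}

Matching then more or less boils down to finding, for each template node, the image position whose jet scores highest while the neighborhood structure of the sheet is preserved.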

When the actual matching of the inputted object against the stored template is performed, it leverages network self-organization. I will maybe talk about this in a later post, because I still haven’t posted my problematic code yet, which is starting to annoy me.

Anyhoo, I don’t remember what I was writing about now. Oh yeah, Back-Propagation. So I was working on that for a client, and my god, what a pain in the butt getting some of it to work with SharePoint was. My main problem was getting the god damn weights to update correctly. What I finally settled on was this:

private readonly Unit[][] neuralNet;

public double[] neuralData;

public static double PrimaryDerivationOfActivation(double Argument)
{
    // Derivative of the logistic sigmoid, expressed in terms of the unit's output.
    return (Argument * (1 - Argument));
}
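
// The Unit and Link types never made it into this post, so here is a minimal guess at
// their shape -- just enough for UpdateWeights below to compile, not the real classes.
// (Needs a using System.Collections.Generic; for the List<Link>.)
public class Link
{
    public Unit Source;      // upstream unit feeding this connection
    public double Weight;    // connection weight adjusted during back-propagation
}

public class Unit
{
    public List<Link> InputLinks = new List<Link>();

    // [0] presumably holds the unit's error value, [1] the previous weight change (momentum).
    public double[] neuralData = new double[2];

    private double output;
    public void SetOutput(double value) { output = value; }
    public double GetOutput() { return output; }
}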

protected void UpdateWeights(double learningCount, double influence, double decayRate)
{
    for (int i = 1; i < neuralNet.Length; i++)
    {
        for (int j = 0; j < neuralNet[i].Length; j++)
        {
            Unit unit = neuralNet[i][j];

            foreach (Link link in unit.InputLinks)
            {
                // Gradient-style step: learning rate * upstream output * this unit's stored
                // error value (neuralData[0]) * derivative of the activation, plus a momentum
                // term built from the previous change (neuralData[1]).
                double delta = (learningCount * link.Source.GetOutput() * unit.neuralData[0]
                                * PrimaryDerivationOfActivation(unit.GetOutput()))
                               + (influence * unit.neuralData[1]);

                unit.neuralData[1] = delta;

                // Apply the change, then shrink the weight a little (weight decay).
                link.Weight = (link.Weight + delta) - (decayRate * link.Weight);
            }
        }
    }
}

Whew, I am glad I finally got the mother to work. Anyways, I will hopefully be releasing the forecasting system if the client is hip to it, and hopefully an API that lets other developers plug AI applications into SharePoint and build other things on top of it. Or I may be the only person interested in it. Meh. :)
