It's an impressive achievement, but I've been around a long time and seen lots of those. It's even more impressive given the age of the dev, and I hope he gets the chance to improve, develop, and contribute to the art and science in general.
His bot just said this, though:
"It's I know a fair bit."
That just screams "Eliza!" to me. I remember implementing Eliza back in 1979, copying the text from a magazine, and then looking to see how I could improve it. I did write a much better version that didn't make so many grammatical howlers, but that's now lost in the mists of time.
It's interesting to see how much, and simultaneously how little, the art/science has progressed, especially given the wealth of tools and data available.
While I understand where you are coming from, and obviously the bot is not flawless, it is ridiculous to compare our technology today to Eliza. ACUMAN uses a wealth of knowledge bases and natural language processing to comprehend queries and give accurate responses. Obviously, the field is inherently complex, but I find it absurd to say that it has progressed so little, considering that it was never dreamed that a bot would be able to answer the exact release date of a specific musical album, for instance.
It is still a work in progress, but I don't understand why it would be a huge turnoff that the bot made a simple grammatical error, inadvertently prepending the contraction "it's" before the actual answer.
From the two responses I've had it would seem that my comment is going to be misunderstood and misinterpreted. Fair enough. I could write essays on this (in fact, I have when I took Philosophy as a minor to accompany my Pure Maths and Computer Science courses) and comments here just aren't going to cut it.
As I say, fair enough.
The responses I'm getting from it have a consistent feel. They are either reciting facts, or they are totally non-committal. Here:
"I don't like picking favorites. Everything
that is something regarded with special favor
or liking seems good to me."
That has all the feeling of a canned response.
I know the advances that have been made, I know how impressive this thing actually is, I know how hard it is to do this, and I know the work that must have gone into it. And even knowing all of that, when it makes what are to humans trivial grammatical mistakes, it takes away from all that and reduces it to a machine.
My comments are not intended to diminish the accomplishment, but to highlight a place where the illusion gets broken. If you have a large, totally white canvas with one small, off-center black dot, what will people look at?
So please understand that this is not a dismissal of the work, or of the achievement, but a highlighting of one specific point that disproportionately detracts from the effectiveness.
To continue to provide feedback to try to help, it just rendered like this:
Latest Firefox on latest Ubuntu. edit: That's just a grab of part of the screen, I can do the whole window if you'd like to see the context.
Edit: It would be nice to have some details about how it is doing what it's doing. They might be there, but I haven't had time to rummage much, as I'm in the middle of other things. Someone mentioned AIML. I will be back to look again later.
Its response to your first question is in fact exactly right, because you didn't ask what you thought you asked. Your brackets are in the wrong place. You asked:
e^i(pi)+1
which is:
(pi * (e^i) ) + 1
which is:
2.69740975.. + 2.64355906... i
So it was right.
And sqrt(i) is sqrt(sqrt(-1)) which is (-1)^(1/4), so again, it was right (excepting that it put "you" instead of "i")
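Both parses can be checked numerically with Python's cmath; this is just a verification of the arithmetic above, not anything about how the bot itself evaluates expressions:

```python
import cmath
import math

# The bot read "e^i(pi)+1" as (pi * e^i) + 1, not as e^(i*pi) + 1.
bot_parse = math.pi * cmath.exp(1j) + 1
print(bot_parse)                 # ≈ 2.69740975 + 2.64355906j, matching the bot's answer

# What the asker meant: Euler's identity, e^(i*pi) + 1 = 0.
intended = cmath.exp(1j * math.pi) + 1
print(abs(intended))             # ≈ 0 (up to floating-point error)

# sqrt(i): the principal square root of i equals (-1)^(1/4) = e^(i*pi/4).
print(cmath.sqrt(1j))            # ≈ 0.70710678 + 0.70710678j
print((-1 + 0j) ** 0.25)         # same principal value
```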
Progress isn't just about the upper bounds of human achievement, but also the average. I think it's a mistake to confuse the two, or to place one too high above the other. Of all people you should appreciate that. :) If the next generation tackles calc or recursion even one year earlier or one percentage point more often than yours did, that's a huge cumulative win.
I do know that - I know that very well. I'm just interested to see that with all the vast amount of progress that has been made, and all the tools available for language analysis, this system still makes mistakes like that.
I'd love to see more about how much this is gluing together existing components, and how much is original to the project. Back when I was doing this everything had to be written by hand, from scratch. So much time was spent/wasted in basic, underlying stuff. It's fantastic that people now have so many resources that they don't have to start from so far back, and the inventiveness can shine through.
I'd love to see the inventiveness in this and other projects, but it's hard to see what's really new.
That's the interesting thing: unlike most chatterbot projects, I am not taking existing technologies and jumbling them together; a quick glance at the "read more" page on the technologies of ACUMAN will confirm this.
Everything is made from scratch, from the ground up. I could have used AIML (an XML-based markup language for chatbots), or anything I wanted. But instead, to make it unique and customizable to the demands of this project, I created my own markup language entitled ACUMANSCRIPT. Even the sentiment analysis algorithms are made from scratch, using Naive Bayes classifiers. The only APIs in use are those for some aspects of the knowledge base, which is a must in any machine learning or artificial intelligence project, as not leveraging the massive amount of data available on the internet would be a waste.
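ACUMAN's actual code isn't public, but a from-scratch Naive Bayes sentiment classifier along the lines described could look roughly like this; the class name, toy corpus, and labels here are purely illustrative:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesSentiment:
    """Minimal multinomial Naive Bayes with add-one (Laplace) smoothing."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word -> count
        self.label_counts = Counter()            # label -> number of docs
        self.vocab = set()

    def train(self, docs):
        for text, label in docs:
            self.label_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            score = math.log(self.label_counts[label] / total_docs)  # log prior
            total_words = sum(self.word_counts[label].values())
            for word in text.lower().split():
                count = self.word_counts[label][word] + 1            # smoothing
                score += math.log(count / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Illustrative toy corpus (not ACUMAN's actual training data)
clf = NaiveBayesSentiment()
clf.train([
    ("i love this great project", "positive"),
    ("what a wonderful clever bot", "positive"),
    ("this is a terrible broken mess", "negative"),
    ("i hate these awful errors", "negative"),
])
print(clf.predict("what a great clever project"))   # positive
print(clf.predict("these errors are terrible"))     # negative
```

Nothing fancy: log-probabilities avoid underflow, and add-one smoothing keeps unseen words from zeroing out a class.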
I sincerely doubt that we are anywhere near a flawless piece of artificial intelligence software, but this is not a result of people's lack of diligence in the field, but rather a testimony to its complexities.
Google has made a short documentary explaining why natural language processing and machine learning are some of the hardest fields due to their surprisingly complex nature:
Right, that's much more interesting, especially making your own markup. Did you read extensively and decide that AIML and friends were not really suitable and hence you had to make your own, or did you make your own because you wanted to get a feel for the problem domain before trying to take advantage of the work others have done?
... doubt that we are anywhere near a flawless piece
of artificial intelligence software, but this is not
a result of people's lack of diligence in the field,
but rather a testimony to its complexities.
That is certainly true.
... a quick glance at the "read more" page on the
technologies ...
That button isn't working on my browser: the latest Firefox on the latest Ubuntu. When I get time I'll look at it using Chrome, but I've got other stuff on at the moment. It would be interesting to know more about ACUMANSCRIPT - are you intending to release it or otherwise make it available? How are you tapping into the "knowledge bases"? If these questions are answered in the (currently not working for me) link on your page then don't answer them again here, but you might want to write a page specifically talking about these things and submit that here.
Thank you for expressing interest. You are correct, there are some issues that may be encountered with Firefox, particularly in a non-OS X or non-Windows environment. I am currently working on fixing those bugs.
I decided that AIML was not useful to me because it was too rigid and too static in syntax. This is particularly true of the manner in which most AIML interpreters parse it, producing a rigid "if hello is in string, then say nice meeting you" syllogism (basically a beefed-up conditional statement) that essentially has no room or place in actual artificial intelligence. I could have adjusted it to match my needs by creating my own interpreter, but I thought that if I was going to do that, why not create my own language that matches my needs better, with syntax that is less convoluted and more focused on pattern-matching and machine learning?
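The "beefed-up conditional" style of matching being criticized can be caricatured in a few lines; these patterns and responses are hypothetical, not taken from ACUMAN or any particular AIML interpreter:

```python
# A caricature of rigid AIML-style pattern matching: each category is
# effectively "if pattern matches input, emit template", with no learning.
PATTERNS = [
    ("HELLO", "Nice meeting you."),
    ("WHAT IS YOUR NAME", "My name is Bot."),
]

def respond(user_input):
    normalized = user_input.strip().upper().rstrip("?!.")
    for pattern, template in PATTERNS:
        if pattern in normalized:      # a beefed-up conditional, nothing more
            return template
    return "I don't understand."

print(respond("hello there"))        # Nice meeting you.
print(respond("How old are you?"))   # I don't understand.
```

Anything outside the enumerated patterns falls through to a default, which is exactly the rigidity the comment above objects to.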
I thought of releasing an API where you can code using the syntax of ACUMANSCRIPT- but I am unfortunately not prepared for open-sourcing ACUMAN's main architecture code due to it still competing in contests and fairs.
If you're interested, I have some ideas for how to prevent the specific problem that immediately caught my attention. I'm pretty sure it could be coupled to your front end as a separate module/system that then lets you use it or not as you like, and see how different people react to it. Contact details in my profile.
My ideas are definitely half-baked and may be completely wrong/useless/inappropriate, or it may even be that you prefer to continue to work largely independently. That's not a bad thing, as getting input from other people may stifle your own creativity and ideas.
Regardless, thanks for the responses and for the interesting project.
I do not care about speech recognition when we are talking about AGI.
I care about AGI being able to understand written text - the most natural form of communication between humans and AGI.
The problem is that AGI is not able to understand written text well, cannot build good mental models and cannot make meaningful decisions based on them.