Orders are the mechanism for modifying the behavior of the system and controlling the application.
They are processed case-insensitively; in other words, writing @HELP is the same as writing @Help.
But remember that if you forget the @ symbol, the order will be interpreted as a declarative sentence.
Order to exit the application.
Remember that data will not be saved if test mode is ON.
Same result as pressing the 'x' in the DOS window.
Shows this information.
If not OFF, after the user receives the answer to an affirmative question (is/have/can ... ?), the interface asks for confirmation.
Invert is the default value.
Depending on the user's response (y/n), the knowledge is reinforced or corrected. If the user inputs anything else, the confirmation process is cancelled for that question.
Question confirmation is the mechanism the interface provides for manually correcting the current knowledge, which is normally provided via text or automated extractions from the web.
Confirming also changes the source, to mark the assertions as verified.
There are 3 strategies for correcting an assertion (inputting No in the confirmation):
Let's look at an example using the detract mode:
1. Assume the concept cat has mammal as parent twice. Obviously, if you ask about it, the answer is yes. [Highlighted in green.]
2. The interface will ask the user to confirm that response. If the user answers yes to the confirmation, the assertion (cat isa mammal) will be reinforced: the tendency of that knowledge will increase. [Highlighted in blue.]
Note that you can achieve the same result by manually inputting the sentence "cats are mammals" with trust = 1.
3. If you want to correct the fact, you have to answer No to the confirmation; in this case the tendency will be decreased. Assuming the trust is set to 1, the tendency of the assertion will then be 2 again, but that is not enough to correct it (3 times learned that "cats are mammals" against 1 saying the contrary). The solution is to ask the same question and answer No as many times as the assertion's tendency; but that could be very tedious, especially if the tendency is very high.
Therefore it is better to set the trust to a high number and apply the correction at once. [Highlighted in red.]
This process is the same as providing the sentence "cats are not mammals" with a trust higher than the tendency of the assertion.
In the case of group questions (what ... ?), since several elements can be involved, it would be very tedious to ask for confirmation of each of them.
E.g. what is mammal? → lion, right(y/n)? [wait for user input] → tiger, right(y/n)? [wait for user input] → cat, right(y/n)? ...
Note: when the mode is set to any value other than "off", both the tendency and source filters will be automatically disabled.
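The tendency arithmetic described above can be sketched as follows. This is a minimal illustration, not the actual Aseryla code; the function name `confirm` is hypothetical:

```python
# Illustrative sketch (not the actual Aseryla implementation): how a
# confirmation could adjust the tendency of an assertion.
def confirm(tendency, trust, answer_yes):
    """Reinforce the assertion on 'y', weaken it on 'n', by the trust value."""
    return tendency + trust if answer_yes else tendency - trust

# "cat isa mammal" learned 3 times -> tendency 3
t = confirm(3, 1, True)    # confirmed: reinforced to 4
t = confirm(3, 1, False)   # denied with trust 1: back to 2, still positive
t = confirm(3, 5, False)   # denied with a high trust: -2, now corrected
```

With trust 1 a single denial only lowers the tendency; a trust higher than the current tendency flips it negative in one step, which is exactly the shortcut recommended above.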
If ON, when a question is asked, the search is also made through the parents of that concept (inheritance of characteristics).
ON is the default value.
Example, in memory there is only the following knowledge:
cat is a mammal
mammal is an animal
animal is alive
so:
@mode deepsearch off
is cat alive? Unknown (the concept cat does not have alive in its feature list)
what is alive? animal
@mode deepsearch on
is cat alive? Yes (it also searches for the characteristic in its parents: cat → mammal → animal[alive])
what is alive? animal, mammal, cat (because it also includes their offspring)
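The inheritance lookup in the example can be sketched as a walk up the parent chain. This is an illustration only; the names `PARENTS`, `FEATURES` and `has_feature` are made up:

```python
# Minimal sketch of the deepsearch behaviour over the example knowledge
# above (the data layout and names are hypothetical, not Aseryla's).
PARENTS = {"cat": "mammal", "mammal": "animal"}
FEATURES = {"animal": ["alive"]}

def has_feature(concept, feature, deepsearch):
    """Answer 'is <concept> <feature>?', optionally climbing the parents."""
    node = concept
    while node is not None:
        if feature in FEATURES.get(node, []):
            return True
        if not deepsearch:
            return False          # only the concept's own feature list
        node = PARENTS.get(node)  # inheritance of characteristics
    return False
```

With deepsearch off, `has_feature("cat", "alive", False)` stops at cat's own feature list (Unknown); with it on, the search climbs cat → mammal → animal and finds "alive" (Yes).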
Enables the object guessing mechanism.
When a question with multiple conditions is asked: if ON, it returns a percentage ratio of fulfilled conditions; if OFF, it returns only those concepts which fulfil every condition.
ON is the default value.
The higher the reliability of the source, the higher this value must be.
The number must be a positive integer between 1 and 99 (less than 100).
1 is the default value.
Aseryla can manage the sense that sentences convey, but not their associated (and very important) emotional charge. Emotion is the method humans use to qualify input; you usually accept as true the things you read in academic books, or what someone you trust says.
The 'trust' is the mechanism the system provides to emphasize the quality of the input.
The number must have a value between 1 and 16.
1 is the default value.
If someone tells you a fact, you can assume it is true; but if you hear the same fact from different sources, you become surer about its truth.
Since the system can't distinguish the source of the inputs by itself, 'source' is the mechanism to differentiate them.
Its values are defined as: 1 administrator, 2 registered user, 3 unregistered user, 4 books, 5 dictionaries, 6 encyclopedias, 7 web pages, 8 web searchers, 9 lexical databases, 10 to 14 currently unused (reserved for future purposes), 15 Contrast & Verified, 16 Absolutely True.
Sets the format used to show attributes in the response to a group question.
Natural is the default value.
@show attrformat natural
what has leg? leg of cat, leg of table, leg of chair, leg of elephant
@show attrformat none
what has leg? cat%leg, table%leg, chair%leg, elephant%leg
This is the format in which the attributes are stored in memory (the concept name of the attribute), and therefore the format to use with @show term.
@show attrformat main
what has leg? cat, table, chair, elephant
Useful when you only want to know which concepts are affected.
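The three attribute display formats amount to simple string transformations of the stored `concept%attribute` form. A sketch, with a made-up function name and simplified to single-level attributes:

```python
# Sketch of the three attribute display formats (not Aseryla's real code).
def format_attr(stored, fmt):
    """'stored' is the memory form 'concept%attribute' used by @show term."""
    concept, attribute = stored.split("%", 1)
    if fmt == "natural":
        return f"{attribute} of {concept}"   # e.g. "leg of cat"
    if fmt == "main":
        return concept                       # only the affected concept
    return stored                            # "none": the stored form itself
```

For example, `format_attr("cat%leg", "natural")` gives "leg of cat", while "main" reduces it to just "cat".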
Sets the format of multiple-word concepts when showing the response to a group question.
Natural is the default value.
@show specformat natural
what is team? football team, basketball team, rugby team
@show specformat none
what is exchange? stock_exchange
This is the format in which multiple-word concepts are stored in memory, and therefore the format to use with @show term.
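The specialization formats are an even simpler transformation: the stored form uses underscores, and the natural form replaces them with spaces. A sketch with a made-up function name:

```python
# Sketch of the two specialization display formats (illustrative only).
def format_spec(stored, fmt):
    """'stored' is the memory form, e.g. 'stock_exchange'."""
    return stored.replace("_", " ") if fmt == "natural" else stored
```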
Saves the activity (the same as is shown in the console) into the specified file.
By default this option is set to OFF.
If the flag is set to 'off', the file name and path are ignored.
Watch out: if the file can't be created, or the application does not have write permissions, no error message will be shown in the console.
Filters out relations whose tendency is lower than the indicated number in frame/set searches (question answering).
By default this option is set to zero, which means no tendency filter is applied.
Example, in memory there is only the following knowledge:
cat IS nice 4
mammal IS nice 2
mammal IS big -2
so:
@mode tendfilter 2
what is nice? cat, mammal
@mode tendfilter 3
what is nice? cat
@mode tendfilter 5
what is nice? None
is cat nice? Unknown
is cat big? Unknown (negatives are also considered)
@mode tendfilter 0
is cat nice? Yes
is cat big? No
Note: when this filter is active, the confirmation mechanism is automatically disabled.
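The filter in the example can be sketched as follows. The data layout and the names `KNOWLEDGE`, `visible` and `what_is` are made up for illustration:

```python
# Sketch of the tendency filter over the example knowledge above.
KNOWLEDGE = {("cat", "nice"): 4, ("mammal", "nice"): 2, ("mammal", "big"): -2}

def visible(tendency, tendfilter):
    """0 disables the filter; otherwise |tendency| must reach the threshold."""
    return tendfilter == 0 or abs(tendency) >= tendfilter

def what_is(feature, tendfilter):
    """Group question: concepts that positively hold the feature."""
    return sorted(c for (c, f), t in KNOWLEDGE.items()
                  if f == feature and t > 0 and visible(t, tendfilter))
```

Note that `visible` uses the absolute value, so a negative assertion such as (mammal IS big, -2) is also hidden by a threshold of 5, which is why "is cat big?" becomes Unknown instead of No.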
Example, in memory there is only the following knowledge:
cat IS mammal 1 1+15 (source 1=administrator / 15=Contrast & Verified)
mammal IS nice 1 1+15
mammal IS large 1 1
so:
@mode deepsearch on
@mode conffilter off
is cat nice? Yes
is cat large? Yes
@mode conffilter on
is cat nice? Yes
is cat large? Unknown
Note: when this filter is active, the confirmation mechanism is automatically disabled.
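The confirmed-source filter reduces to a membership test on the relation's source set. A sketch; the constant and function names are made up:

```python
# Sketch of the confirmed-source filter (illustrative, not Aseryla's code).
CONFIRMED = 15  # source "Contrast & Verified"

def passes_conffilter(sources, conffilter_on):
    """When the filter is on, only relations with a confirmed source survive."""
    return (not conffilter_on) or (CONFIRMED in sources)
```

Applied to the example: (cat IS mammal, sources 1+15) survives the filter, while (mammal IS large, source 1) is hidden, so "is cat large?" becomes Unknown.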
Example, in memory there is only the following knowledge:
cat IS mammal 1 4+7 (sources: 4 = books / 7 = web pages / 15 = confirmed / 16 = absolutely true)
mammal IS nice 1 4+15
mammal IS large 1 4
mammal IS small 1 4+7
mammal IS pink -1 4+7
mammal IS yellow -1 16
so:
@mode multfilter on
is mammal nice? Unknown (the confirmed source is not taken into account)
is mammal large? Unknown (only one source)
is mammal small? Yes (2 sources)
is mammal pink? No (2 sources)
is mammal yellow? No (absolutely true sources are never filtered)
Note: when this filter is active, the confirmation mechanism is automatically disabled.
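The multiple-source rules above (at least two real sources; 15 never counts; 16 always passes) can be sketched like this, with made-up names:

```python
# Sketch of the multiple-source filter (illustrative, not Aseryla's code).
CONFIRMED, ABSOLUTE = 15, 16

def passes_multfilter(sources, multfilter_on):
    """Require at least two real sources; 16 always passes, 15 never counts."""
    if not multfilter_on:
        return True
    if ABSOLUTE in sources:
        return True  # absolutely true sources are never filtered
    real = {s for s in sources if s not in (CONFIRMED, ABSOLUTE)}
    return len(real) >= 2
```

This reproduces the example: (mammal IS nice, 4+15) is hidden because 15 is not counted, (mammal IS small, 4+7) survives with two real sources, and (mammal IS yellow, 16) always survives.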
Shows this information.
Shows the current settings.
Essentially it is a reflection of the status of the settings that can be modified with the @mode orders.
Parameter to limit the results of a question when guess mode is on.
The answer will return only those concepts whose fulfillment percentage is higher than this threshold.
0 is the default value.
@mode deepsearch on
@mode guessing on
@show guessthres 0
@show guessperc on
what is animal or pet and can hit and is feline? cat 100%, dog 75%, bear 50%
@show guessthres 60
what is animal or pet and can hit and is feline? cat 100%, dog 75%
@show guessthres 75
what is animal or pet and can hit and is feline? cat 100%
Take into account that if the threshold is 100, the results obtained with guessing mode on (approximate search) are the same as with guessing mode off (exact search), but obtained quite inefficiently.
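The threshold behaves as a strictly-greater comparison, as the examples show (a threshold of 75 drops the 75% match). A sketch with a made-up function name:

```python
# Sketch of the guessing threshold (illustrative, not Aseryla's code).
def apply_guessthres(results, threshold):
    """Keep concepts whose fulfilment percentage is higher than the threshold."""
    return [(concept, pct) for concept, pct in results if pct > threshold]
```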
Only applies when guess mode is on. The answer will return only the first N concepts, where N is this parameter's value.
This behaviour can be disabled by setting the value to zero: then no elements will be removed from the answer.
0 is the default value.
@mode deepsearch on
@mode guessing on
@show guessmax 0
what is animal or pet and can hit and is feline? cat 100%, dog 75%, bear 50%
@show guessmax 2
what is animal or pet and can hit and is feline? cat 100%, dog 75%
@show guessmax 10
what is animal or pet and can hit and is feline? cat 100%, dog 75%, bear 50%
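The cap is a simple truncation of the result list, with zero meaning "no limit". A sketch with a made-up function name:

```python
# Sketch of the guessmax cap (illustrative): zero disables the cap.
def apply_guessmax(results, guessmax):
    return list(results) if guessmax == 0 else list(results)[:guessmax]
```

Note that a cap larger than the result list (e.g. 10 for three results) simply returns everything, as in the last example above.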
The answer will show the fulfilment percentage associated with every concept.
Only applies when guess mode is on.
On is the default value.
@mode deepsearch on
@mode guessing on
@show guessperc on
what is animal or pet and can hit and is feline? cat 100%, dog 75%, bear 50%
@show guessperc off
what is animal or pet and can hit and is feline? cat, dog, bear
Examples:
@show term cat
CONCEPT [9] cat noun [4]
FRAME 15
parents: mammal(2) pet(-2)
features: nice(2){very} short(1)
attributes: leg(2){4} fur(2)
skills: run(1)[field(2)/sky(-1)]
affecteds: train(1)
SETS
@show term dog
CONCEPT [21] dog noun [11]
FRAME 4
skills: run(1){specially-very} jump(2) eat(1)[meat(1)]
adjnoun: animal
ofclauses: wood
SETS
A word with more than one syntactical type, and no frame:
@show term trained
CONCEPT [22] train adjective/verb
SETS
affecteds: cat
In case the term does not exist in the memory:
@show term nono
not found
You can also ask for the attributes of concepts using the symbol %
concept%attribute{%attribute ...} E.g. leg of cat → cat%leg
@show term cat%leg
CONCEPT [19] cat%leg noun [9]
FRAMES 7
affecteds: leg(2)
features: short(2)
skills: run(2)
SETS
attributes: cat
Or even for specializations (multiple-word concepts) using the underscore
noun_noun{_noun ...} E.g. stock exchange → stock_exchange
@show term Football_Team
CONCEPT [21] football_team noun [12]
FRAMES
parents: team(1)
SETS
Notes:
If the features or skills have related adverbs, these will be shown after the affected element as a dash-separated list enclosed in curly braces.
If a skill has any interaction, this will be shown after the affected skill as a slash-separated list enclosed in brackets.
Processes an entire text file, analyzing the sentences line by line.
Quite useful to avoid manually inputting a large number of sentences, or for processing large texts.
Using the standard settings on a common personal computer, the application is able to process an average of 6000 sentences per hour.
It processes a file by chunking its content dot by dot.
If you provide a file with the following content:
hello
world.
bye
to everyone
@load file  | @load book
------------|----------------
hello       | hello world
world       | bye to everyone
bye         |
to everyone |
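The two chunking strategies compared in the table can be sketched as follows. The function names are made up; @load file takes one sentence per line, while @load book joins the text and splits it at every dot:

```python
# Sketch of the two chunking strategies (illustrative, not Aseryla's code).
def chunk_by_lines(text):
    """@load file: one sentence per non-empty line (trailing dot stripped)."""
    return [line.strip().rstrip(".") for line in text.splitlines() if line.strip()]

def chunk_by_dots(text):
    """@load book: flatten line breaks, then split the text at every dot."""
    flat = " ".join(text.split())
    return [part.strip() for part in flat.split(".") if part.strip()]
```

On the example file, `chunk_by_lines` yields the four fragments "hello", "world", "bye", "to everyone", while `chunk_by_dots` yields the two sentences "hello world" and "bye to everyone".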
Processes a file with Internal Language Codes.
This allows inserting knowledge directly into the memory, without the need to reprocess the sentences with the Internal Language.
Useful if you have preprocessed sentences (for example, the "internal_language.txt" output file from sentences processed in test mode),
or if you need to recover the content of the memory after the system files were corrupted.
For inputting knowledge directly into the memory without using the NLPkit.
Nevertheless, it is highly recommended to use sentence processing and question confirmation for these tasks.