Service interface accepted commands

Communication channel for external user interfaces that send requests to Aseryla and receive its services.



Orders


shutdown: @x

   Order to shut down the service.

   The memory content will be arranged and stored.

   The NLP core service is not shut down; manual action is required (executing "NLPstop.cmd|sh")


autosave: @w [number]

   Sets the number of processed IL codes after which the memory is automatically saved.

   If the value is zero, the memory is never saved automatically (test mode), unless a force save order (@v) is received

   If the parameter is not a valid number, it is set to zero (disabling the mechanism)


force memory save: @v [0/1]

    Saves the memory content in optimum format.

    It works even with the autosave mechanism disabled (@w 0)

    If the parameter is 1, the clean and tidy up process is also applied.

    * This command activates the progress task reporter and it's cancellable.


ilc memory save: @i [location/filename]

   Saves the memory content in internal language format.


show term: @s [word]

   Returns a string (ready to print out) with all the memory information (lexicon, frame, sets) about the indicated term.

   An example of its output is shown in the "show concept" examples below.


show concept: @e [lexicon_reference] {location/filename}

   Returns the word type, frame content and set relations of a concept, in lexicon reference format.


    Format:

    [frame weight]/[isNoun]/[isVerb]/[isAdjective]/[frame_parent_list]/[frame_negative_parents_list]/[frame_feature_list]/[frame_negative_feature_list]/

    [frame_attribute_list]/[frame_negative_attribute_list]/[frame_skill_list]/[frame_negative_skills_list]/[frame_affected_actions_list]/

    [sets_parent_list]/[sets_feature_list]/[sets_attribute_list]/[sets_skill_list]/[sets_affected_actions_list]/

    [sets_negative_parents_list]/[sets_negative_feature_list]/[sets_negative_attribute_list]/[sets_negative_skills_list] (18 slash separated lists)


    Values:

       weight = number

       isNoun,isVerb,isAdjective = 1/0

       frame_parent_list, frame_features_list, frame_affected_list = [lex_ref1 tendency1 conf1 #src1 lex_ref2 tendency2 conf2 #src2 ...]

       frame_attribute_list= [lex_ref1 tendency1 conf1 #src1 numbers1 lex_ref2 tendency2 conf2 #src2 numbers2 ...] (numbers is an asterisk separated list of number ranges; a single asterisk if empty)

       frame_skill_list= [lex_ref1 tend1 conf1 #src1 interact1 lex_ref2 tend2 conf2 #src2 interact2 ...] (interactions are a lexref-tendency pair list; elements are asterisk separated, a single asterisk if empty)

       sets lists = [lex_ref1 lex_ref2 ...] (both positives and negatives)

       * conf is 1 or 0, depending on whether the relation was manually confirmed by the user

       * #src is the number of different sources the relation has

       * there is no [frame_negative_affected_list] nor [sets_negative_affected_list] because the model does not allow negative relations for these characteristics.


   Examples:

Assuming the following data:

@show term cat

CONCEPT [9] cat noun [12]

FRAME 15

parents: mammal(2) pet(1) dog(-1) → lexrefs: mammal=13, pet=14, dog=15

features: nice(2){very} short(1) → lexrefs: nice=16, short=17

attributes: paw(3){2,4} fur(2) → lexrefs: 18, 19

skills: run(4){forest(1),sky(-2)} → lexrefs: 20{28,29}

affected: train(1) → lexrefs: 21

SETS

parents: bengal balinese ocicat → lexrefs: 22, 23, 24

neg parents: hound husky→ lexrefs: 25, 26

neg skills: jump → lexrefs: 27


Then:

@e 12

26/1/0/0/13 2 0 1 14 1 1 1 /15 -1 0 2/16 2 0 1 17 1 0 1//18 3 0 1 2*4 19 2 *//20 4 0 1 28*1*29*-2//21 1 0 1/22 23 24 ////25 26 //27


@e 0

KO: invalid reference


    * As an optimization, and to avoid the 8 MB socket buffer size limitation:

    If there are more than 100 relations and a filename is provided, the results will be saved into the indicated file; in this case this order returns "+".

    It is your responsibility to provide a location that exists and has write permissions. If the file already exists, the new content is appended to the previous one.


    Note: the elements of each list are returned sorted in ascending order by their corresponding lemma.
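As an illustration, the frame relation lists described in the Values section above can be decoded client-side in groups of four values. The following is a minimal sketch; parse_frame_list is a hypothetical helper, not part of the service:

```python
def parse_frame_list(field):
    """Decode one frame relation list field of an @e response.

    Each entry is 4 whitespace-separated values:
    lexicon reference, tendency, confirmed flag (conf) and source count (#src).
    """
    tokens = field.split()
    if len(tokens) % 4 != 0:
        raise ValueError("each frame list entry has 4 values: lex_ref tendency conf #src")
    return [
        {"lex_ref": int(tokens[i]), "tendency": int(tokens[i + 1]),
         "conf": int(tokens[i + 2]), "sources": int(tokens[i + 3])}
        for i in range(0, len(tokens), 4)
    ]
```

E.g. the frame_parent_list field "13 2 0 1 14 1 1 1" from the example decodes as mammal (lexref 13, tendency 2) and pet (lexref 14, tendency 1, confirmed).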


show language stats: @a

   Returns a slash separated list of integers with the total number of existing relations and the content of the memory.


   The format is:

      words/concepts/nouns/verbs/adjectives/

      (positive)parents/(positive)features/(positive)attributes/(positive)skills/(positive)affecteds/

      (negative)parents/(negative)features/(negative)attributes/(negative)skills/

      (positive)interactions/(negative)interactions/attributes with number indicated/

      relations with more than one source/frames which weight is zero/frames which weight is one/processed sentences
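The 21 fields above can be mapped to names on the client side. This is a sketch; the field names in STAT_FIELDS are shorthand labels chosen here, not identifiers defined by the service:

```python
STAT_FIELDS = [
    "words", "concepts", "nouns", "verbs", "adjectives",
    "parents", "features", "attributes", "skills", "affecteds",
    "neg_parents", "neg_features", "neg_attributes", "neg_skills",
    "interactions", "neg_interactions", "numbered_attributes",
    "multi_source_relations", "zero_weight_frames", "one_weight_frames",
    "processed_sentences",
]

def parse_stats(response):
    """Turn the slash separated @a response into a name -> int mapping."""
    values = [int(v) for v in response.strip().strip("/").split("/")]
    if len(values) != len(STAT_FIELDS):
        raise ValueError("expected %d fields, got %d" % (len(STAT_FIELDS), len(values)))
    return dict(zip(STAT_FIELDS, values))
```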


get the lexicon list: @l [number] [location/filename]

   Returns a comma separated list of the lemmas existing in the lexicon (with specs and attributes in natural format)

   starting from a specific position (useful for external interface actions such as word checking, autocomplete proposals and synchronizing).


    E.g.: @l 0 /temp/lexdata.csv → -,a,person,cat,leg of person,river bank,zed

           @l 5 /temp/lexdata.csv → zed

           @l 6 /temp/lexdata.csv → -

    Remember: the position in the list is exactly the lexicon reference.


    Due to the 8 MB socket buffer size limitation, the list size is limited to 1000 words.

    If the dictionary holds more than 1000 elements, the results will be saved into the indicated file; in this case this order returns "+".

    It is your responsibility to provide a location that exists and has write permissions. If the file exists, the new content will replace the previous one.

    E.g.: @l 0 /temp/lexdata.csv → + (the list was saved into the indicated file)
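Since the position in the returned list is exactly the lexicon reference, a client can build a reference map directly. A sketch (parse_lexicon is an illustrative helper, not part of the service):

```python
def parse_lexicon(start, response):
    """Map each lemma of an @l response to its lexicon reference.

    The index of a lemma in the list, offset by the requested start
    position, is exactly its lexicon reference.  A response of "+"
    means the list was written to the file given in the order instead.
    """
    if response == "+":
        return None  # more than 1000 elements: read the indicated file
    return {start + i: lemma for i, lemma in enumerate(response.split(","))}
```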


check word type: @t [lexicon_reference] [type] {add/del}

   Checks whether a concept has the indicated word type (1 noun, 2 adjective, 3 verb); returns OK if true, KO otherwise


    If the optional third parameter {add/del} is provided, the indicated type will be removed from the concept (parameter = 1) or added (any other value)

    Adding is equivalent to calling the internal language code "WORD [type] concept concept"

    Watch out: deleting also implies removing every relation associated with the indicated type. If all types are deleted, the concept will be removed when the memory arrange is performed.


NLP status: @n

   Checks the NLP service status; returns the port number it is listening on when it is running properly, zero otherwise


server status: @h

    Provides a heartbeat signal so the interfaces can check whether the server is running properly.

    It sends an OK if everything is fine; a communication error means the server is not running or not listening on the indicated port.


retrieve sentences: @d [type] [limit] [search criteria]

    Retrieves already processed sentences from the database cache.

      Parameters:

           [1|2] type "latest processed sentences" | "by search criteria"

           [number] maximum number of sentences to retrieve

           [text] the search criteria:

              - word(s) that have to match exactly (case sensitive) any literal in any sentence, regardless of position

              - using the pipe symbol (|, meaning OR) or the ampersand symbol (&, meaning AND) you can perform searches with multiple conditions

              - if the criteria is empty, the latest processed sentences are returned


      Returns a slash separated list with the sentences that match the criteria, or "-" if none is found


   Examples:

send: @d 3 (give the last 3 processed sentences)

receive: a cat is nice/a dog is an animal/I have been running


send: @d 10 nice cat | dog (select up to 10 sentences that contain "nice cat" OR "dog")

receive: a dog is an animal


send: @d 10 ship & robot

receive: "-"
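The criteria semantics can be mirrored on the client side for local filtering. This sketch assumes | alternatives are split at the top level and & joins terms inside each alternative, which is consistent with the examples above but not explicitly specified:

```python
def matches(sentence, criteria):
    """Client-side mirror of the @d search semantics: '|' means OR,
    '&' means AND, and each term must appear literally (case
    sensitive) somewhere in the sentence."""
    if not criteria.strip():
        return True  # empty criteria: latest processed sentences
    for alternative in criteria.split("|"):
        terms = [t.strip() for t in alternative.split("&")]
        if all(term in sentence for term in terms):
            return True
    return False
```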


cached DB sentence: @r [sentence]

   Retrieves the results of a previously processed sentence that has been stored into the database


    Returns an empty string if the sentence is not found in the database, or the results in the format described here


   Examples:


@r bla blu

    [empty string]


@r Tigers are nice and big

    Tigers(tiger)/N are(be)/V nice/J and big/J

    (ROOT (S (NP (NNS Tigers//SUBJECT//)) (VP (VBP are) (ADJP (JJ nice//OBJECT//) (CC and) (JJ big//OBJECT//)))))

    - tiger is nice / 1 1

    - tiger is big / 1 1



save words: @j [file]

    Saves the dictionary and lexicon content in internal language WORD format into the indicated file location.

      Parameters:

           [file] the location and name of the file where store the content (note: if the file already exists, it will be replaced)


      Returns "OK" if everything goes fine; "KO" in case any problem accessing or writing to the file


   Examples:

send: @j c:\

receive: KO


send: @j c:\data\ilws.txt

receive: OK



top ten weighted frames: @p

    Returns the Top 10 relevant concepts.

    The result is a comma separated list of up to 10 lexicon references, sorted from highest to lowest weight, excluding those with zero weight.


   Examples:

send: @p

receive: 12,55,88



get DB failed sentences: @f

    Returns a newline separated list with the sentences that suffered an error when they were processed, or empty if there are none.

    For maintenance reasons, when a sentence cannot be correctly processed due to an error, it is stored into the "failed" database table.

    The NLP server not running when the sentence was processed is not considered a syntax or grammar failure.



clear DB failed sentences: @c

    Deletes all the sentences from the "failed sentences" DB table.



Questions


free question formatted: ?q [parameters]

   Receives a free text question and returns the results in the specified format


   parameters:

    [1|0] enable / disable deep search

    [1|0] enable / disable the guessing mode

    [0..100] threshold (filter by hitting percent) [used only when guessing mode is active]

    [number] limit results (set to zero to no limit them) [used only when guessing mode is active]

    [1|0] attach the associated hitting percent to the result elements [used only when guessing mode is active]

    [0,1] translate the specializations (0 none, 1 natural)

    [0, 1, 2] set the type of attribute translation (0 none, 1 natural, 2 main)

    [number] filters out relations whose tendency is lower than the indicated value (0 means no filtering)

    [1|0] enable / disable the filter by contrasted source (only relations that have been manually confirmed by the user)

    [1|0] enable / disable the filter by multiple source (only relations that have been mentioned by at least 2 different origins)

    [text] the question (it is not necessary to end it with the ? symbol)


   Returns:

    - Misunderstand: [error description]

    if there is any syntactic or grammatical problem


    - ?[u|n|y] ILcode

    when it is an affirmative question

    ?u means Unknown, ?n No and ?y Yes

    the ILcode is the one associated with the question, to be provided in a question confirmation (?c)


    - None (if there is no relation) | Any (there is a positive relation but no numbers) | comma separated list with the numbers

    when it is a numbered attribute question


    - None (if no element fits the question) | comma separated list (filtered and formatted based on the parameters) with the concepts that match the conditions

    when it is a group or guessing question


   Examples:

?q 0 0 0 0 0 0 0 0 0 0 what can bliblablu → Misunderstand: < bliblablu > does not exist in the dictionary


affirmative questions

?q 0 0 0 0 0 0 0 0 0 0 is cat nice → ?y 2 10 23 0


numbered attribute questions

?q 0 0 0 0 0 0 0 0 0 0 how many legs has a person → 1, 2


group questions

?q 0 0 0 0 0 0 0 0 0 0 what can handle → animal_person%hand

?q 0 0 0 0 0 1 1 0 0 0 what can handle → hand of animal person


object guessing

?q 1 1 0 0 1 0 0 0 0 0 what is mammal and nice or good → cat 100%, lion 100%, tiger 100%, dog 66% elephant 33%

?q 1 1 75 2 1 0 0 0 0 0 what is mammal and nice or good → cat 100%, lion 100%

?q 1 0 30 10 1 0 0 0 0 0 what is mammal and nice or good → cat, lion, tiger

(as the guessing mode is disabled, the threshold, results limit and show percent parameters are not taken into account)


interactions

?q 0 0 0 0 0 0 0 0 0 0 what does mammal eat → meat, vegetables

?q 1 0 0 0 0 0 0 0 0 0 what can jump fences → dog, cat, pet, mammal, animal


affirmative interactions

?q 0 0 0 0 0 0 0 0 0 0 can cats jump the stones → Yes
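Since ?q takes ten positional parameters before the question text, a small builder avoids miscounting them. A sketch; build_free_question and its keyword names are conveniences invented here, the service only sees the assembled string:

```python
def build_free_question(question, deep=0, guess=0, threshold=0, limit=0,
                        show_percent=0, spec_translation=0, attr_translation=0,
                        tendency_filter=0, confirmed_only=0, multi_source=0):
    """Assemble a ?q order, with the parameters in the documented order."""
    params = (deep, guess, threshold, limit, show_percent, spec_translation,
              attr_translation, tendency_filter, confirmed_only, multi_source)
    return "?q " + " ".join(str(p) for p in params) + " " + question
```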



question confirmation: ?c [parameters]

   Corrects or reinforces a relation, depending on the question confirmation evaluation


   parameters:

    [1..5] key (1 isa, 2 is, 3 have, ...)

    [number] subject lexicon reference

    [number] object lexicon reference

    [string] extra (use the slash symbol if there is no extra)

    [number] trust

    [1..16] source

    [1|2|3] type of correction

    [1|0] the user response (true/false)


   Examples:

?q 0 0 0 0 0 0 0 is cat nice → ?y 2 10 23 0 (tendency = 1)

?c 2 10 23 0 4 1 2 1 (reinforce)

?q 0 0 0 0 0 0 0 is cat nice → Yes 2 10 23 0 (tendency = 5)

?c 2 10 23 0 2 1 2 0 (correcting) [applies the invert mode: changes the tendency sign, respecting the frequency]

?q 0 0 0 0 0 0 0 is cat nice → No 2 10 23 0 (tendency = -5)
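The numbers in the example suggest the following arithmetic: a "true" response reinforces the tendency by the trust value (1 + 4 → 5), while a "false" response inverts its sign (5 → -5). This is only an illustration of those example numbers, not the service's actual internal rule; apply_confirmation is a hypothetical helper:

```python
def apply_confirmation(tendency, trust, confirmed):
    """Illustrative tendency update matching the example above."""
    if confirmed:
        return tendency + trust  # reinforce: add the trust value
    return -tendency  # invert mode: flip the sign, keep the magnitude
```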



frame direct search: ?a [parameters]

   Performs a direct memory search (no graph parsing) based on an affirmative question


   parameters:

    [is|have|can|canbe] the question key

    [subject] reference to the concept

    [object] reference to the concept

    [extra] a positive number (in case attribute relation)

    [1|0] deep search

    [number] tendency filter

    [1|0] confirmed source filter

    [1|0] multiple origin filter


    Returns the same ILcode as the petition with the answer (the tendency of the relation) appended, or ?k and the error description


    Examples:

Send: ?a is 6 55 0 0 0 0 0 [first 0 is the extra, the second one is the deep search, the next 3 are the filters]

Receive:?a is 6 55 0 3 (3 is the positive tendency → Yes)


Send: ?a is 6 55 0 0 0 1 1 [first 0 is the extra, the second one is the deep search, the next 3 are the filters]

Receive:?a is 6 55 0 0 (0 → Unknown due to filters)


Send:?a have 10 23 4 1 0 0 0 (search in deep: have persons 4 legs? [assuming 10 is the lexicon reference for "person", and 23 for "leg"])

Receive:?a have 10 23 4 2 (4 is the extra, 2 the answer → No)


Send:?a canbe 11 33 0 0 0 0 0

Receive:?a canbe 11 33 0 0 (Not exist or it has neutral tendency → Unknown)


Send: ?a 2 10 23 0 0 0 0

Receive: ?k the subject must be a noun



sets direct search: ?s [parameters]

   Performs a direct memory search (no graph parsing) based on a group question (with only one condition)


   parameters:

    [is|have|can|canbe] the question key

    [subject] reference to the concept

    [object] reference to the concept

    [extra] a positive number (in case attribute relation)

    [1|0] deep search

    [number] tendency filter

    [1|0] confirmed source filter

    [1|0] multiple origin filter

    [number] maximum number of elements to return (reducing the process time); if zero then no limit is applied


    Returns the same ILcode as the petition with the answer appended (a whitespace separated lexicon reference list [limited to 20 elements]), or ?k and the error description


    Examples:

Send: ?s is 0 10 0 0 0 0 (what is person? [assuming 10 is the lexicon reference for "person"])

Receive:?s is 0 10 0 22 12 56 (22, 12 and 56 are the elements that fit the relation)


Send: ?s have 32 10 2 0 0 0 (what cat has 2 person? [assuming 32 is the lexicon reference for "cat"])

Receive:?s have 32 10 2 0 (the last zero means None in the answer)


Send: ?s 5 10 23 0 0 0 0

Receive: ?k A "can" key requires a verb object
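Per the examples, the first five tokens of an ?s response echo the petition (?s, key, subject, object, extra) and the rest are the matching references, with a lone 0 meaning None. A client-side sketch (parse_set_answer is an illustrative helper):

```python
def parse_set_answer(response):
    """Extract the lexicon reference list from an ?s response."""
    if response.startswith("?k"):
        raise ValueError(response[2:].strip())
    refs = [int(t) for t in response.split()[5:]]  # skip the echoed petition
    return [] if refs == [0] else refs  # a lone 0 means None
```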



numbered attribute direct search: ?n [parameters]

   Performs a direct memory search (no graph parsing) based on a numbered attribute question


   parameters:

    [subject] reference to the concept

    [object] reference to the concept

    [1|0] deep search

    [number] tendency filter

    [1|0] confirmed source filter

    [1|0] multiple origin filter


    Returns the same ILcode as the petition with the answer (a comma separated number list or None) appended, or ?k and the error description


    Examples:

Send: ?n 15 10 1 0 0 0 (how many legs has a person? [with deep search activated])

Receive:?n isa 15 10 0 1, 2, 4 (the answer could be directly printed)


Send: ?n 32 10 0 0 0 0

Receive:?n isa 32 10 0 None (Note: in this type of petition, the values of the key and the extra do not matter)


Send: ?n 88 99 0 0 0 0

Receive: ?k The characteristic does not exists into the lexicon
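Based on the examples, the answer part of an ?n response follows the five echoed tokens (?n, isa, subject, object, extra) and is either "None" or a comma separated number list. A client-side sketch (parse_numbered_answer is an illustrative helper):

```python
def parse_numbered_answer(response):
    """Extract the number list (or None) from an ?n response."""
    if response.startswith("?k"):
        raise ValueError(response[2:].strip())
    answer = response.split(None, 5)[5]  # everything after the echoed petition
    if answer == "None":
        return None
    return [int(n) for n in answer.split(",")]
```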



object guessing direct search: ?g [parameters]

   Performs a direct memory search (no graph parsing) based on a multiple condition question


   parameters:

    [1|0] deep search

    [1|0] approximation mode

    [number] tendency filter

    [1|0] confirmed source filter

    [1|0] multiple origin filter

    [number] maximum number of elements to return (reducing the process time); if zero then no limit is applied

    - ILcode condition 1

       [1|2|4|5] operator (1 and, 2 or, 4 not and, 5 not or)

       [be|have|can|canbe|doiac|caniac] key

       [characteristic] reference to the concept

       [concept] reference to the concept

       [extra] a positive number (in case attribute relation)

    - ilc condition 2

    - {ilc condition 3}

    - {ilc condition ...}

             Note: the operator of the first condition is ignored, as are the concepts of the rest of the conditions.

             They could really be removed, but this uniformity in the format allows easy parsing and splitting, as every condition has 5 elements.


    Returns the results in a list (up to 500 elements) of (reference, percentage_hit_ratio) pairs.

    * This command activates the progress task reporter and it's cancellable


    Examples:

Send: ?g 1 1 0 0 0 0 1 be 15 10 0 2 have 44 10 3 4 have 55 10 0 (what {1} animal[10] is[1] nice[15] {0} or[2] have three[3] legs[44] and not[4] can[3] jump[55] {0}? )

Receive:?g 11 100 12 50 13 25 (elephant 100%, cat 50%, bird 25%)


Send: ?g 1 1 0 0 0 0 1 be 33 0 0 1 be 44 0 0 (what is animal and robot?)

Receive:?g (none)


Send: ?g 0 0 0 0 0 0 1 caniac 22 33 0 1 be 44 11 0 (what animal[11] can jump[22] person[33] and is nice[44]?)

Receive:?g 55 0 (canary[55] in a no deep search and strict mode)


Send: ?g 1 1 0 0 0 20 1 1 1 1 1

Receive: ?k Incomplete command
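The examples show an ?g response as the ?g echo followed by alternating (reference, hit percentage) tokens. A client-side sketch (parse_guess_answer is an illustrative helper):

```python
def parse_guess_answer(response):
    """Turn an ?g response into (lexicon_reference, hit_percentage) pairs."""
    if response.startswith("?k"):
        raise ValueError(response[2:].strip())
    tokens = response.split()[1:]  # drop the '?g' echo
    return list(zip([int(t) for t in tokens[0::2]],
                    [int(t) for t in tokens[1::2]]))
```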



free format direct search: ?f [parameters]

   Performs a free format question; the response is the same as if you had called the direct search questions (?a, ?s, ?n, ?g, ?i, ?t)


   parameters:

    [1|0] deep search

    [1|0] guessing mode

    [number] filters out relations whose tendency is lower than the indicated value (0 means no filtering)

    [1|0] enable / disable the filter by contrasted source (only relations that have been manually confirmed by the user)

    [1|0] enable / disable the filter by multiple source (only relations that have been mentioned by at least 2 different origins)

    [number] maximum number of elements to return (reducing the process time); if zero then no limit is applied

    [text] the question (it is not necessary to end it with the ? symbol)


   Returns the corresponding ILC direct answer question code, or ?k with the error description


   Examples:

Send: ?f 0 0 0 0 0 0 have persons 4 legs (affirmative question)

Receive:?a have 10 23 4 2 (code)


Send: ?f 1 0 0 0 0 0 what can run (group question)

Receive:?a can 57 0 0 25 46 78 84 (code)


Send: ?f 0 0 0 0 0 0 how many legs has a person (numbered attribute question)

Receive:?n isa 32 10 0 None (code)


Send: ?f 1 1 1 1 1 20 what is

Receive: ?k Incomplete command




sets interactions direct search: ?i [parameters]

   Performs a direct memory search (no graph parsing) based on an interaction question


   parameters:

    [do|can] the question key

    [action] reference to the action verb

    [object] reference to the concept (do) or the receiver (can)

    [1|0] negative particle question

    [1|0] deep search

    [number] tendency filter

    [1|0] confirmed source filter

    [1|0] multiple origin filter

    [number] maximum number of elements to return (reducing the process time); if zero then no limit is applied


    Returns the same ILcode as the petition with the answer appended (a whitespace separated lexicon reference list), or ?k and the error description


    Examples:

Send: ?i do 20 10 0 0 0 0 0 0 (what do person eat? [assuming 10 is the lexicon reference for "person" and 20 for "eat"])

Receive: ?i CAN 10 20 0 d0 11 12 13 (11, 12 and 13 are the elements that fit the relation)

    - key: always CAN (skill list)

    - subject: 10 (person in this example)

    - verb: 20 (eat in this example)

    - object: 0 (unused)

    - extra: 2 chars → type[(d)o|(c)an] neg(1/0); so "d0" means "what do"


Send: ?i can 20 10 1 1 3 1 1 10 (what can not eat person? in deep search with filters activated, limiting the search to 10 elements)

Receive: ?i CAN 10 20 0 c1 0 (empty value in the answer → None / the extra c1 means "what can not")


Send: ?i can 222

Receive: ?k Incomplete command
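The two-character extra of an ?i response can be decoded as described above: the first character is the question type and the second the negative particle flag. A client-side sketch (decode_interaction_extra is an illustrative helper):

```python
def decode_interaction_extra(extra):
    """Decode the ?i extra: 'd' = "what do", 'c' = "what can";
    the second character is '1' when the question is negated."""
    qtype = {"d": "do", "c": "can"}[extra[0]]
    negated = extra[1] == "1"
    return qtype, negated
```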



affirmative frame interactions direct search: ?t [parameters]

   Performs a direct memory search (no graph parsing) based on an affirmative interaction question


   parameters:

    [concept] reference to the concept

    [action] reference to the verb

    [receiver] reference to the concept

    [1|0] deep search

    [number] tendency filter

    [1|0] confirmed source filter

    [1|0] multiple origin filter


    Returns the same ILcode as the petition with the answer appended (1 if the interaction relation exists with positive tendency; 0 otherwise),

             or ?k and the error description


    Examples:

* Assuming 11 is the lexicon reference for "person"; 22 for "jump"; 33 for "fence"


Send: ?t 11 22 33 0 0 0 0 (can persons jump the fence?)

Receive:?t can 11 22 33*1 1 (the last 1 means the interaction exists with positive tendency → Yes / 33*1 is the ILC extra format for interactions)


Send: ?t 11 0 33 0 0 0 0 (can persons [invalid word] the fence?)

Receive:?k the action must be a valid verb



obtain search path: ?p [parameters]

   Performs a search to solve a question, returning the path of visited concepts.


    parameters: internal language code

    returns: the visited concepts path if the question conditions are solved (the characteristic and extra are found with positive or negative tendency), or empty if not found

                   path format = subject { [isa|have] node1 [isa|have] node2}} {not_}keycode characteristic {{not_}extra}


    The search is always performed in deep mode and without filters.


    Examples:

Send: ?p cat can jump stone

Receive: cat isa pet isa mammal have leg have fierce_claw can jump stone


Send: ?p robot is live

Receive: robot isa device not_is live


Send: ?p can pigeon fly ocean

Receive: pigeon isa bird can fly not_ocean (the pigeon can fly but not the ocean)


Send: ?p cat can fly

Receive: (empty string)

    Note: the attributes are naturalized, but the specializations and the negative clauses are not, so the results string is whitespace separated.



obtain ontology graph: ?o [parameters]

   Saves into the indicated file the ontology (its parents and children) of a given concept.


    parameters:

       [concept] reference to the concept

       [file] path and file name of the file where the graph edges will be stored.

    returns: OK, plus the pair list of edges stored in the indicated file, or KO: {description} if anything goes wrong.


    Examples:

Send: ?o 75 c:\temp\graph.txt (assuming 75 is the lexicon reference for "device")

Receive: OK (the file will store [("electronic device","DEVICE"), ("DEVICE","unit"), ("unit","machine")])


Send: ?o 13 c:\temp\graph.txt (assuming 13 is the lexicon reference for "jump", which does not have any parent reference)

Receive: OK (the file will store [("JUMP","JUMP")])


Send: ?o 0 edges.txt

Receive: KO: The concept does not exist
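The examples show the edge file holding a Python-style list of ("child","PARENT") pairs, so it can be read back with a literal evaluator. A sketch, assuming the file contains exactly one such list (load_ontology_edges is an illustrative helper):

```python
import ast

def load_ontology_edges(path):
    """Load the edge pair list written by a ?o order."""
    with open(path) as fh:
        return ast.literal_eval(fh.read())  # safe: literals only, no code
```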




Text processing


text free: Tl [parameters]

   Given a text, performs a complete language process (parsing, lexical, grammatical, semantical, learning)


    Parameters:

    [number] trust

    [1..16] source

    [text] the sentence to process


    The format of the results is:

    the cleaned sentence with the POS description (only for the semantic words) and the lemma in parentheses (only if it differs from the word) [new line]

    the grammar tree provided by the Stanford Core NLP, with subjects, objects and link verbs annotated [new line]

    ILcode1 [new line]

    ILcode2 [new line]

    ...


   Examples:


Tl 1 1 I will run tomorrow

    NLP kit not available


Tl 4 2 Cats are nice. Dogs are good and big


    cats(cat)/N are(be)/V nice/J

    (ROOT (S (NP (NNS Cats//SUBJECT//)) (VP (VBP are) (ADJP (JJ nice//OBJECT//)))))

    - cat is nice / 4 2


    Dogs(dog)/N are(be)/V good/J and big/J

    (ROOT (S (NP (NNS Dogs//SUBJECT//)) (VP (VBP are) (ADJP (JJ good//OBJECT//) (CC and) (JJ big//OBJECT//)))))

    - dog is good / 4 2

    - dog is big / 4 2


Tl 1 2 the cat has been pushing to the dog


    the cat/N has(have)/V been(be)/V pushing(push)/V to the dog/N

    (ROOT (S (NP (DT the) (NN cat//SUBJECT//)) (VP (VBZ has) (VP (VBN been) (VP (VBG pushing//VERBLINK//) (PP (TO to) (NP (DT the) (NN dog//OBJECT//))))))))

    - dog canbe push / 1 2

    - cat can push dog*1 1 2


Tl 1 2 cars don't have feathers


    cars(car)/N do/V not have/V feathers(feather)/N

    (ROOT (S (NP (NNS cars//SUBJECT//)) (VP (VBP do) (RB not) (VP (VB have//VERBLINK/neg//) (NP (NNS feathers//OBJECT//))))))

    - car have feather / -1 2



load text file: Tf [parameters]

    Applies the full language phases and memory storing to the content of a text file.

    * This command activates the progress task reporter and it's cancellable


    Parameters:

    [number] trust

    [1..16] source

    [0|1] split the content by line or by dot (sentence separator)

    [0|1] 1: save all the sentences and knowledge; 0: do not store empty sentences or those with only specialization or NER relations

    [route] path to the file


    Note: for performance reasons, only a few of the processed sentences are stored into the database.


   Examples:


Tf 1 1 1 0 not_exists.txt

    KO file is not accessible


Tf 1 1 1 0 ./text/doc.txt (by dot) → process the sentences "a", "b c d", "e"


Tf 1 1 0 0 ./text/doc.txt (by line) → process the sentences "a", "b", "c", "d", "e"


         Assuming "./text/doc.txt" file content is:

a. b

c

d. e
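The two split modes can be inferred from the example above: by dot, newlines act as plain whitespace and only the dot separates sentences; by line, both newlines and dots separate them. A client-side sketch of that inferred behaviour (split_sentences is an illustrative helper, not the service's actual implementation):

```python
def split_sentences(content, by_dot):
    """Reproduce the Tf split modes shown in the example above."""
    if by_dot:
        parts = content.replace("\n", " ").split(".")
    else:
        parts = [p for line in content.splitlines() for p in line.split(".")]
    return [p.strip() for p in parts if p.strip()]
```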



load ILcodes file: Tc [1|routeErrorFile] [routeIlcFile]

    Process an internal language codes file.

    The provided file can only contain internal language codes; empty lines and remarks (old format: lines starting with a double asterisk or equal symbols) are not allowed.

    * This command activates the progress task reporter and it's cancellable


    Parameters:

    - 1: stop when the first error is found. Every correctly processed ILC (up to the error) is nevertheless stored into the memory.

    - File location: the entire input file is processed, and the errors found are stored into the indicated file.

       Note: neither the filename nor the path may contain whitespace


   Examples:

Tc

KO incomplete command


Tc 1 not_exists.txt

KO file is not accessible


Tc 1 c:/aseryla/data/ilcodes1.txt

OK 3 correctly processed / 0 failed


Tc 1 ilcodes2.txt

KO processing (WORD k FAIL FAIL) on line (3)


Tc ./ilcerr.out ilcodes2.txt

OK 3 correctly processed / 2 failed


         Assuming "ilcodes1.txt" file content is:

WORD N mammals mammal

WORD V jump jump

mammal canbe jump

         Assuming "ilcodes2.txt" file content is:

== bad ilc1


WORD N noun noun

WORD J adj adj

WORD k FAIL FAIL

noun IS adj /

mammal incomplete


         And the following content will be saved in "ilcerr.out":

processing (WORD k FAIL FAIL) on line (3) from (bad ilc1)

processing (mammal incomplete) on line (5) from (incomplete sentence)


process ILcode: Ti [ilc]

   Process an internal language code.

   Examples:

Ti WORD N cats cat

OK


Ti WORD H blabla

KO (H) is not a valid type [N/J/V]


Ti cat isa cat / 2 1

OK


Ti cat can blabla

KO A skill relation requires a verb object (blabla)



clean text: Tp [text]

   Gets a text, then cleans, prepares and splits it into sentences (but they are not processed).

   Returns a slash separated list with the parsed sentences.


   Examples:

Send: Tp The cat is .3490sdf# nice. But not large.

Receive: The cat is nice/But not large/


Send: Tp hello

Receive: hello/


Send: Tp .

Receive: / (empty, no sentences)

       * Remember there is an 8 MB socket buffer size limitation; therefore, if the result list exceeds that size, the command returns "-".



cleans a file: Td [chunk] [file]

   Parses the content of a text file, splitting it into cleaned sentences.


   Parameters:

    [0|1] split the content by line or by dot (sentence separator)

    [route] path to the file


   Returns a slash separated list with the parsed sentences. Or a "Tk" command with the error description.


   Examples:

Send: Td 1 text.txt

Receive: sentence1/sentence2/...


Send: Td 0 notexist.txt

Receive: Tk File is not accessible


* Check the examples of load text file



extract ilcs: Ts [sentence]

    Processes a sentence and returns its relations (in internal language code format).


    Parameters:

    [ONE sentence] it is assumed there is only one sentence; no text cleaning/splitting is applied


   Returns:

       a newline separated list with the ILCs

       K0 if no relations have been identified or if the input is not a valid sentence (filtered in the lexical phase)

       K1 if the NLP server is not ready

       K2 in case of any processing error


   Examples:

Send: Ts %%%%

Receive: K0


Send: Ts cats are nice

Receive: K1


Send: Ts regular animals don't have 20 eyes

Receive: animal IS regular / 1 1\nanimal HAVE eye 20 -1 1\n




text processing: Tb [saveAll] [text]

    Processes an entire text: splitting it into sentences (it assumes the dot is the sentence separator),

    cleaning those sentences, sending them to the NLP in "bunch" mode to increase performance,

    then analysing them to extract their ILCs, and finally storing the results into the memory BUT NOT into the database.


    If the parameter SaveAll is 1, all the sentences and knowledge are stored into the memory; otherwise empty sentences, and those with only specialization or NER relations, are not stored.


    It returns the number of sentences processed if everything goes fine

    or "Error: {reason}" in case any error has been produced


   Examples:

send: Tb 1 Cats eat. d. Rome is a city of Italy

receive: 2

       { it will be processed as: "Cats eat", "d" [this is filtered], "Rome is a city of Italy" }


send: Tb 1 the cat is nice

receive: Error: NLP kit not available

       { assuming the NLP is not running }


send: Tb 1 a mouse is hunted by a eagle. That eagle didn't hunt before. A mouse is nice.

receive: Error: grammar processing the sentence (a mouse is nice)

       { assuming that sentence is not correctly processed }


send: Tb 0 the cat is nice. there is Robert Walker

receive: 2 (but as the second sentence does not have any relation, it is filtered)


    The process stops when the first error is found. Both trust and source are set to 1.

    Due to the 8 MB socket limitation, a text length higher than 2 MB is not recommended, although this command does not perform any check on this.



sentence analysis: Ta [text]

    Analyzes a sentence, returning the syntax, grammar and semantic relations WITHOUT saving them into the memory.


    In case the text has more than one sentence, only the first one is analyzed; the others are discarded.

    It returns "KO: {reason}" in case any error has been produced

    or the analysis of the sentence using the following format:

       - the cleaned sentence [newline]

       - the syntax description for the semantic words (format: WORD{(LEMMA), if different from the word}{[person,location,organization], in case of a NER}/TYPE[J,N,V,D]) [newline]

       - the grammar results, the identified {SUBJECTS}/{VERB LINKS}/{OBJECTS} (all of them are comma separated lists) [newline]

       - the semantic results {newline separated list with the extracted internal language (ILC) codes}


   Examples:

send: Ta (empty text)

receive: KO: no valid sentence has been found


send: Ta key not found (sentence without relations)

receive:

       key not found

       key/J found(find)/V

       //


send: Ta Mark was pressed. The order was sent. (more than one sentence)

receive:

       Mark was pressed

       Mark(mark)[person]/N was(be)/V pressed(press)/V

       /pressed/mark

       mark isa person / 1 1

       mark canbe press / 1 1


send: Ta some paws of that cat hit the lazy dog or these foxes (multiple relations)

receive:

       some paws of that cat hit the lazy dog or these foxes

       paws(paw)/N cat/N hit/V lazy/J dog/N foxes(fox)/N

       cat%paws/hit/dog,foxes

       cat have paw / 1 1

       dog is lazy / 1 1

       dog canbe hit / 1 1

       fox canbe hit / 1 1

       cat%paw can hit dog*1&1 1 1

       cat%paw can hit fox*1&1 1 1


send: Ta the nice cat has two paws (with numbers)

receive:

       the nice cat has two paws

       nice/J cat/N has(have)/V two(2)/C paws(paw)/N

       cat/has/paws

       cat is nice / 1 1

       cat have paw 2 1 1
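The reply format above (cleaned sentence, syntax line, grammar line, then one ILC per line) can be parsed with a small helper; this is an illustrative sketch, not part of the service, and the field names in the returned dict are my own:

```python
def parse_ta(reply: str) -> dict:
    """Parse a Ta reply into sentence, tokens, grammar lists and ILC codes."""
    if reply.startswith("KO:"):
        raise ValueError(reply[len("KO:"):].strip())
    lines = reply.strip().split("\n")
    sentence = lines[0].strip()
    # syntax line: space separated WORD{(LEMMA)}{[NER]}/TYPE tokens
    tokens = []
    for tok in lines[1].split():
        word, wtype = tok.rsplit("/", 1)
        tokens.append((word, wtype))
    # grammar line: {SUBJECTS}/{VERB LINKS}/{OBJECTS}, comma separated lists
    subjects, verbs, objects = (
        part.split(",") if part else []
        for part in lines[2].strip().split("/")
    )
    ilcs = [line.strip() for line in lines[3:] if line.strip()]
    return {"sentence": sentence, "tokens": tokens, "subjects": subjects,
            "verbs": verbs, "objects": objects, "ilcs": ilcs}
```

Running it on the "Mark was pressed" example yields an empty subject list, `pressed` as the verb link, `mark` as the object, and the two ILC codes.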



add knowledge: Ty [ilc]

    Given an internal language code, it appends all the necessary knowledge so it is correctly inserted into the memory.

    This includes all the affected concepts with their corresponding type and the needed relations, and it may even adjust the tendency of the existing relations.

    Although an internal language code allows multiple extras (numbered attributes or interactions), this command has the limitation of managing only ONE extra element.


    It returns "OK" if everything goes fine or a "KO: {reason}" in case any error has been detected.


   Examples:

send: Ty cat isa pet

receive: OK

and the following ilc has been processed: [WORD N cat cat], [WORD N pet pet], [cat isa pet / 1 1]


send: Ty CAT1 isa pet

receive: KO: The concept (CAT1) has an invalid format word


send: Ty cat%leg can scratch fence 2 3

receive: OK

and the following ilc has been processed: [WORD N cat cat], [WORD N leg leg], [WORD V scratch scratch], [WORD N fence fence], [cat have leg / 2 3], [cat%leg can scratch fence*1&1 2 3]
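The KO example above shows that concept words must follow a specific format ("CAT1" is rejected). The exact server-side rule is not documented here; as a purely hypothetical client-side pre-check, one could assume lowercase letters and underscores:

```python
import re

# Hypothetical validity check mirroring the KO example: "CAT1" is rejected.
# The real server-side rule is not documented; lowercase letters and
# underscores are ASSUMED here for illustration only.
VALID_CONCEPT = re.compile(r"^[a-z_]+$")

def concept_is_valid(word: str) -> bool:
    """Return True if the word looks like a valid concept under the assumed rule."""
    return bool(VALID_CONCEPT.match(word))
```

Such a check can reject obviously malformed concepts before a round trip to the service, but the server reply remains authoritative.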



word sense disambiguation: Tw [concept] key1{/key2/..} neg[0|1] charact1{/charact2/...}

    Given a concept, any keys/verbs and/or any characteristics, it returns the Word Sense Disambiguation (when a concept has different meanings, it decides which one is correct in this context).

    E.g. "that mouse was broken" → in this context the word "mouse" refers to a "PC device", not an "animal"


      Parameters:

           concept = lexicon reference

           keys = slash separated list with the keys [isa/is/have/can/canbe/{lexicon reference to a regular verb}] of the relation

           isNeg = [0|1] set to 1 if it is a negative relation

           characteristics = slash separated list with the lexicon references to the characteristics to analyze


      It returns "KO {reason}" in case of any trouble, or "percentage/lexref1{/lexref2/...}" with the results

           where the first element is the percentage of conditions that match with the returned elements,

           and the rest are the lexicon references to the elements that match with the asked relation/characteristics

           if the percentage is zero, it means no sense has been identified; the details are provided in the second element of the list:

           1 = the key is ISA (parent relation); it makes no sense to determine the type of object using the ontology (as it is the same for all the possible senses).

           2 = the concept has no parents, so the sense of the concept is the concept itself

           3 = no characteristic has been found in any element of the ontology of the concept


      The algorithm applies the following rules:

           case a) if the key is ISA returns 0/1

           case b) if the concept doesn't have parents returns 0/2

           case c) if no characteristic has been found in the ontology of the concept, returns 0/3

           case d) regular rule for the rest of scenarios

               Regular rule:

               - Creates a branch for each parent of the concept; if 2 branches have any element in common they are considered the same branch

               - Checks if any element of each branch has the indicated relation/characteristic *1 (strong match)

                   if not then checks if the characteristic is found in any other property list *2 (weak match)

               - Gets the element of each "matched" branch with the highest weight among the identified strong/weak matches


         The described algorithm is applied for each key/characteristic pair, and the final result is calculated following the rule that 100%

         is achieved if a branch has a strong match for each key and characteristic, and that percentage is reduced when a match is missing or is only weak.


E.g.: Having the following memory scenario:

     mouse [weight=5] isa pet / isa mammal / isa electronic_device

     pet [weight=2] isa mammal / is nice

     mammal [weight=2] isa animal / is not metallic / can jump / can eat food

     electronic_device [weight=1] isa device / canbe jump

     device [weight=3] isa machine / is metallic / can start

     machine [weight=0] no relations


     So the branches are: [pet, mammal, electronic_device]

     → [[pet, mammal], [mammal, animal], [electronic_device, device, machine]]

     → [[pet, mammal, animal], [electronic_device, device, machine]]

     → [animal, device]
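The branch-merging step illustrated above (branches sharing any element collapse into one) can be sketched as follows; this is a minimal illustration of the stated rule, not the actual Aseryla implementation:

```python
def merge_branches(branches):
    """Merge branches (lists of concepts): if two branches share any
    element they are considered the same branch, per the regular rule."""
    merged = []
    for branch in branches:
        branch = set(branch)
        # collect every already-merged branch that overlaps this one
        overlapping = [b for b in merged if b & branch]
        for b in overlapping:
            merged.remove(b)
            branch |= b  # absorb the overlapping branch
        merged.append(branch)
    return merged
```

With the mouse scenario, `merge_branches([["pet", "mammal"], ["mammal", "animal"], ["electronic_device", "device", "machine"]])` yields the two branches `{pet, mammal, animal}` and `{electronic_device, device, machine}`, matching the walkthrough above.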

send: Tw {mouse} is

receive: KO incomplete command


send: Tw {mouse} isa 0

receive: 0/1 (parent relation)


send: Tw {animal} have 0 {leg}

receive: 0/2 (animal has no parents)


send: Tw {mouse} have 0 {leg}

receive: 0/3 (no element has evidence)


send: Tw {mouse} is 0 {metallic}

receive: 100/{device} (device match strong)


send: Tw {mouse} is 1 {metallic}

receive: 100/{animal} (mammal match strong)


send: Tw {mouse} have 0 {nice}

receive: 33/{animal} (pet has weak evidence)


send: Tw {mouse} is 0 {nice}/{metallic}

receive: 50/{animal}/{device} (device matches strong with metallic, but not with nice / vice versa for pet)


send: Tw {mouse} can 0 {jump}/{eat}

receive: 100/{animal} (mammal match strong both characteristics / electronic_device weak for jump)


send: Tw {mouse} {jump}/{eat} 0 {food}

receive: 75/{animal} (mammal match strong eat/food, weak with jump/food / electronic_device weak for jump, no for eat nor food)



PROGRESS TASK REPORTER

     Some commands can take a considerable time to process their tasks.

     So the system writes the current progress percentage of the running task into the auxiliary file "./data/progress.txt" (and also displays it in the server console).

     This file is created when a [Tc] Load ILC file, [Tf] Text File Processing, [?g] Object Guessing or [@v 1] memory arrangement command is called;

     it is updated every 10 seconds, and removed when the task finishes (when the socket receives the command results).
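A client can monitor a long-running task by polling that file until it disappears; a minimal sketch (the path is as documented, the polling interval mirrors the 10-second update cycle):

```python
import os
import time

PROGRESS_FILE = "./data/progress.txt"  # path as documented above

def watch_progress(poll_seconds=10, timeout=3600):
    """Print the reported progress until the file is removed
    (task finished) or the timeout elapses."""
    waited = 0
    while os.path.exists(PROGRESS_FILE) and waited < timeout:
        with open(PROGRESS_FILE) as f:
            print("progress:", f.read().strip())
        time.sleep(poll_seconds)
        waited += poll_seconds
```

Note the command result still arrives over the socket; this loop only mirrors the auxiliary file for user feedback.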


TASK CANCELLATION

     During the execution of any of these commands, if you create the file "./data/cancel.txt" the current task will be interrupted.

     [Tc] and [Tf] will return an error description with the exact point where the task was interrupted;

     [@v 1] will act as if the parameter 1 was not passed; in other words, the memory will be saved but not arranged; [?g] will return empty results.
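Requesting cancellation is just a matter of creating the documented file; only its existence matters, so an empty file is enough:

```python
import os

CANCEL_FILE = "./data/cancel.txt"  # path as documented above

def request_cancellation():
    """Create the cancel file so the currently running task is interrupted."""
    os.makedirs(os.path.dirname(CANCEL_FILE), exist_ok=True)
    with open(CANCEL_FILE, "w"):
        pass  # empty file: only its existence is checked by the service
```

The service removes or ignores the file on its side according to the rules above; the client only needs to create it.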