Commit 154e786a authored by Boksebeld, N.H.J.'s avatar Boksebeld, N.H.J.

Added old Choregraphe examples and the introductory README for Choregraphe

parent 04880af7
# Choregraphe
Choregraphe uses a simple structure in which you connect blocks to run actions sequentially or in parallel. However, don't be fooled by these simple-looking blocks. The only reason they are there is to give a better overview of long programs/actions. If you double-click a block, it reveals its source code in Python. So in reality, each block is actually a class that defines a certain action. You can even create your own blocks if you want to.
Each block has certain inputs and outputs. An output is triggered when some criterion is met inside the block, and any block connected to that output is then started. The most common output is `onStopped`, which triggers when the block has completed. Blocks can also pass data through these outputs, for example recognized text or the number of recognized faces. Some blocks can stay active indefinitely and trigger certain output ports regularly. We will see some examples soon. You can also double-click any of the inputs/outputs during runtime to trigger it manually!
Each program you create is actually a "new" block of its own, with input ports and output ports. Therefore, you can import entire programs as a block into another program. You can run your program directly, but you can also make it trigger on certain criteria by connecting blocks only to special input ports. When a block triggers the program's output, the whole program stops, even if some blocks have not yet finished running. Just like individual blocks, your program can also output data or different events if you like. In short, the possibilities are endless!
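The wiring model described above can be sketched in plain Python. This is not the Choregraphe API, just a conceptual illustration: each block owns output ports, and connecting an output to another block's input means the downstream block starts when that output fires. Wiring two blocks to the same output makes them start in parallel.

```python
# Conceptual sketch of Choregraphe's block wiring (NOT the real API):
# an output port is a list of downstream blocks that start when it fires.
class Block:
    def __init__(self, name):
        self.name = name
        self.on_stopped = []          # blocks wired to this block's onStopped output
        self.log = log                # shared trace of execution order

    def connect(self, next_block):
        """Wire next_block's input to this block's onStopped output."""
        self.on_stopped.append(next_block)
        return next_block

    def start(self, data=None):
        self.log.append(self.name)    # the block's own action would run here
        for b in self.on_stopped:     # fire the output: start every wired block
            b.start(data)             # data (e.g. recognized text) rides along

log = []
say = Block("Say")
move = Block("Move")
wave = Block("Wave")
say.connect(move)                     # sequential: Move starts after Say stops
say.connect(wave)                     # parallel: Wave is wired to the same output
say.start()
print(log)                            # ['Say', 'Move', 'Wave']
```

The block names here are made up; in Choregraphe the same wiring is drawn graphically by dragging links between ports.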
### Virtual robot
It would be nice to verify certain behavior before running it on the real robot, such as any kind of movement. Choregraphe offers a so-called virtual robot exactly for this purpose. To initialize and connect to your virtual robot, follow these steps:
* `Edit > Preferences > Virtual robot` - Change the robot to Pepper.
* `Connect > Connect to virtual robot`
This robot has limited capabilities compared to the real robot: visual, auditory, and tactile functions do not work. The virtual robot is really only meant for checking that you are not producing any dangerous movements. We will see a third-party simulator later on, which allows you to simulate other behavior as well. When you connect to the real robot, the 3D view mirrors the real robot instead, and you can, for example, toggle the camera views. You can also configure detected faces to be shown in the 3D world.
### Examples
This folder contains three examples. Note that only the `motion` program can run on the virtual robot.
`get_age`
> This program uses facial recognition to estimate someone's age from their face. Once the age has been estimated, the robot says it out loud.
`motion`
> Shows different types of motion that can be created. Note the special animation blocks in this program, which are essentially timelines of joint values. Choregraphe can record movements you create by manually moving the robot's limbs, which you can then save as a block.
`speech_dance`
> Pepper will ask you if it should dance. You can respond with 'yes' or 'no'.
### Editing blocks
After you have seen some NAOqi examples, you can start editing and coding your own blocks in Choregraphe. To make your own block, insert a template block from `Box library > Programming > Templates > Python Script`. You can add more inputs and outputs by pressing the `+`-button in the inspector on the right. You can also create a diagram to build a custom block out of a combination of existing blocks. The structure is intuitive, but if you are struggling, remember to have a look at the source code of existing blocks!
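The skeleton below mirrors the shape of the Python Script template: `GeneratedClass` and `ALProxy` are provided by Choregraphe at runtime, so the small `GeneratedClass` stub here exists only to make the sketch readable and runnable outside Choregraphe. `MyBox` and the `greeting` attribute are made-up names for illustration.

```python
# Minimal sketch of a Python Script box's lifecycle, runnable without Choregraphe.
class GeneratedClass(object):          # stub standing in for Choregraphe's base class
    def __init__(self):
        self.fired = []
    def onStopped(self, value=None):   # output ports become callable methods
        self.fired.append(("onStopped", value))

class MyBox(GeneratedClass):           # hypothetical custom box
    def __init__(self):
        GeneratedClass.__init__(self)

    def onLoad(self):
        # initialization code; inside Choregraphe you would create proxies here,
        # e.g. self.tts = ALProxy("ALTextToSpeech")
        self.greeting = "Hello"

    def onUnload(self):
        pass  # clean-up code; also used when the box is stopped

    def onInput_onStart(self, p=None):
        # every input port gets an onInput_<name> method; calling an output
        # method fires that port and may pass data to connected blocks
        self.onStopped(self.greeting)

    def onInput_onStop(self):
        self.onUnload()

box = MyBox()
box.onLoad()
box.onInput_onStart()                  # fires the onStopped output once, carrying "Hello"
```

Inputs and outputs added via the `+`-button in the inspector simply appear as extra `onInput_<name>` methods and callable output methods on the class.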
### More information
http://doc.aldebaran.com/2-5/software/choregraphe/index.html
<?xml version="1.0" encoding="UTF-8" ?><ChoregrapheProject xmlns="http://www.aldebaran-robotics.com/schema/choregraphe/project.xsd" xar_version="3"><Box name="root" id="-1" localization="8" tooltip="Root box of Choregraphe&apos;s behavior. Highest level possible." x="0" y="0"><bitmap>media/images/box/root.png</bitmap><script language="4"><content><![CDATA[]]></content></script><Input name="onLoad" type="1" type_size="1" nature="0" inner="1" tooltip="Signal sent when diagram is loaded." id="1" /><Input name="onStart" type="1" type_size="1" nature="2" inner="0" tooltip="Box behavior starts when a signal is received on this input." id="2" /><Input name="onStop" type="1" type_size="1" nature="3" inner="0" tooltip="Box behavior stops when a signal is received on this input." id="3" /><Output name="onStopped" type="1" type_size="1" nature="1" inner="0" tooltip="Signal sent when box behavior is finished." id="4" /><Timeline enable="0"><BehaviorLayer name="behavior_layer1"><BehaviorKeyframe name="keyframe1" index="1"><Diagram><Box name="WakeUp" id="2" localization="0" tooltip="Call a Wake Up process.&#x0A;Stiff all joints and apply stand Init posture if the robot is Stand" x="242" y="106"><bitmap>media/images/box/movement/stiffness.png</bitmap><script language="4"><content><![CDATA[class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)
        pass

    def onLoad(self):
        self.motion = ALProxy("ALMotion")
        pass

    def onUnload(self):
        pass

    def onInput_onStart(self):
        self.motion.wakeUp()
        self.onStopped() #~ activate output of the box
        pass

    def onInput_onStop(self):
        self.onUnload() #~ it is recommended to call onUnload of this box in a onStop method, as the code written in onUnload is used to stop the box as well
        pass]]></content></script><Input name="onLoad" type="1" type_size="1" nature="0" inner="1" tooltip="Signal sent when diagram is loaded." id="1" /><Input name="onStart" type="1" type_size="1" nature="2" inner="0" tooltip="Box behavior starts when a signal is received on this input." id="2" /><Input name="onStop" type="1" type_size="1" nature="3" inner="0" tooltip="Box behavior stops when a signal is received on this input." id="3" /><Output name="onStopped" type="1" type_size="1" nature="1" inner="0" tooltip="Signal sent when box behavior is finished." id="4" /><Resource name="All motors" type="Lock" timeout="0" /><Resource name="Stiffness" type="Lock" timeout="0" /></Box><Box name="Basic Awareness" id="4" localization="8" tooltip="This box is an interface to the module ALBasicAwareness.&#x0A;&#x0A;It is a simple way to make the robot establish and keep eye contact with people.&#x0A;&#x0A;V1.1.0" x="408" y="111"><bitmap>media/images/box/tracker/basicawareness.png</bitmap><script language="4"><content><![CDATA[class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)

    def onLoad(self):
        #put initialization code here
        try:
            self.awareness = ALProxy('ALBasicAwareness')
        except Exception as e:
            self.awareness = None
            self.logger.error(e)
        self.memory = ALProxy('ALMemory')
        self.isRunning = False
        self.trackedHuman = -1
        import threading
        self.subscribingLock = threading.Lock()
        self.BIND_PYTHON(self.getName(), "setParameter")

    def onUnload(self):
        if self.isRunning:
            if self.awareness:
                self.awareness.stopAwareness()
            self.setALMemorySubscription(False)
            self.isRunning = False

    def onInput_onStart(self):
        if self.isRunning:
            return # already running, nothing to do
        self.isRunning = True
        self.trackedHuman = -1
        if self.awareness:
            self.awareness.setEngagementMode(self.getParameter('Engagement Mode'))
            self.awareness.setTrackingMode(self.getParameter('Tracking Mode'))
            self.awareness.setStimulusDetectionEnabled('Sound', self.getParameter('Sound Stimulus'))
            self.awareness.setStimulusDetectionEnabled('Movement', self.getParameter('Movement Stimulus'))
            self.awareness.setStimulusDetectionEnabled('People', self.getParameter('People Stimulus'))
            self.awareness.setStimulusDetectionEnabled('Touch', self.getParameter('Touch Stimulus'))
            self.setALMemorySubscription(True)
            self.awareness.startAwareness()

    def onInput_onStop(self):
        if not self.isRunning:
            return # already stopped, nothing to do
        self.onUnload()
        self.onStopped()

    def setParameter(self, parameterName, newValue):
        GeneratedClass.setParameter(self, parameterName, newValue)
        if self.awareness:
            if parameterName == 'Sound Stimulus':
                self.awareness.setStimulusDetectionEnabled('Sound', newValue)
            elif parameterName == 'Movement Stimulus':
                self.awareness.setStimulusDetectionEnabled('Movement', newValue)
            elif parameterName == 'People Stimulus':
                self.awareness.setStimulusDetectionEnabled('People', newValue)
            elif parameterName == 'Touch Stimulus':
                self.awareness.setStimulusDetectionEnabled('Touch', newValue)

    # callbacks for ALBasicAwareness events
    def onStimulusDetected(self, eventName, stimulusName, subscriberIdentifier):
        self.StimulusDetected(stimulusName)

    def onHumanTracked(self, eventName, humanID, subscriberIdentifier):
        self.trackedHuman = humanID
        self.HumanTracked(humanID)

    def onHumanLost(self, eventName, subscriberIdentifier):
        self.HumanLost(self.trackedHuman)
        self.trackedHuman = -1

    def setALMemorySubscription(self, subscribe):
        self.subscribingLock.acquire()
        if subscribe:
            self.memory.subscribeToEvent('ALBasicAwareness/StimulusDetected', self.getName(), 'onStimulusDetected')
            self.memory.subscribeToEvent('ALBasicAwareness/HumanTracked', self.getName(), 'onHumanTracked')
            self.memory.subscribeToEvent('ALBasicAwareness/HumanLost', self.getName(), 'onHumanLost')
        else:
            self.memory.unsubscribeToEvent('ALBasicAwareness/StimulusDetected', self.getName())
            self.memory.unsubscribeToEvent('ALBasicAwareness/HumanTracked', self.getName())
            self.memory.unsubscribeToEvent('ALBasicAwareness/HumanLost', self.getName())
        self.subscribingLock.release()]]></content></script><Input name="onLoad" type="1" type_size="1" nature="0" inner="1" tooltip="Signal sent when diagram is loaded." id="1" /><Input name="onStart" type="1" type_size="1" nature="2" inner="0" tooltip="Starts the Basic Awareness with the given Engagement and Tracking mode parameters, using the given stimuli." id="2" /><Input name="onStop" type="1" type_size="1" nature="3" inner="0" tooltip="Stops the Basic Awareness." id="3" /><Output name="onStopped" type="1" type_size="1" nature="1" inner="0" tooltip="Signal sent when box behavior is finished." id="4" /><Output name="StimulusDetected" type="3" type_size="1" nature="2" inner="0" tooltip="This output is stimulated when BasicAwareness detects a stimulus amongst the tracked stimulus.&#x0A;&#x0A;The output data is the stimulus&apos; name." id="5" /><Output name="HumanTracked" type="2" type_size="1" nature="2" inner="0" tooltip="This output is triggered when ALBasicAwareness detects a stimulus that is confirmed to be a human.&#x0A;&#x0A;The output data is the ID corresponding to the tracked human. It is shared with PeoplePerception and can be used there. This output is triggered with -1 if ALBasicAwareness tried to detect a human but failed." id="6" /><Output name="HumanLost" type="2" type_size="1" nature="2" inner="0" tooltip="This output is triggered when the human currently tracked is lost.&#x0A;&#x0A; The output data is the ID corresponding to the lost human. It can be reused in PeoplePerception." id="7" /><Parameter name="Engagement Mode" inherits_from_parent="0" content_type="3" value="FullyEngaged" default_value="Unengaged" custom_choice="0" tooltip='The engagement mode specifies how &quot;focused&quot; the robot is on the engaged person.' 
id="8"><Choice value="Unengaged" /><Choice value="FullyEngaged" /><Choice value="SemiEngaged" /></Parameter><Parameter name="Tracking Mode" inherits_from_parent="0" content_type="3" value="Head" default_value="Head" custom_choice="0" tooltip="The tracking mode describes how the robot keeps eye contact with an engaged person." id="9"><Choice value="Head" /><Choice value="BodyRotation" /><Choice value="WholeBody" /></Parameter><Parameter name="Sound Stimulus" inherits_from_parent="0" content_type="0" value="0" default_value="1" tooltip="" id="10" /><Parameter name="Movement Stimulus" inherits_from_parent="0" content_type="0" value="0" default_value="1" tooltip="" id="11" /><Parameter name="People Stimulus" inherits_from_parent="0" content_type="0" value="1" default_value="1" tooltip="" id="12" /><Parameter name="Touch Stimulus" inherits_from_parent="0" content_type="0" value="0" default_value="1" tooltip="" id="13" /></Box><Box name="Get Age" id="1" localization="8" tooltip="This box returns the age of the person in front of the robot.&#x0A;The detection fails when there are more or less than one person in front of the robot or when the timeout is exceeded.&#x0A;&#x0A;It is possible to set up the Confidence Threshold and the Timeout parameters for this box. " x="592" y="110"><bitmap>media/images/box/interaction/age.png</bitmap><script language="4"><content><![CDATA[class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)

    def onLoad(self):
        try:
            self.faceC = ALProxy("ALFaceCharacteristics")
        except Exception as e:
            raise RuntimeError(str(e) + "Make sure you're not connected to a virtual robot." )
        self.confidence = self.getParameter("Confidence Threshold")
        self.age = 0
        self.counter = 0
        self.bIsRunning = False
        self.delayed = []
        self.errorMes = ""

    def onUnload(self):
        self.counter = 0
        self.age = 0
        self.bIsRunning = False
        self.cancelDelays()

    def onInput_onStart(self):
        try:
            #start timer
            import qi
            import functools
            delay_future = qi.async(self.onTimeout, delay=int(self.getParameter("Timeout (s)") * 1000 * 1000))
            self.delayed.append(delay_future)
            bound_clean = functools.partial(self.cleanDelay, delay_future)
            delay_future.addCallback(bound_clean)
            self.bIsRunning = True
            while self.bIsRunning:
                if self.counter < 4:
                    try:
                        #identify user
                        ids = ALMemory.getData("PeoplePerception/PeopleList")
                        if len(ids) == 0:
                            self.errorMes = "No face detected"
                            self.onUnload()
                        elif len(ids) > 1:
                            self.errorMes = "Multiple faces detected"
                            self.onUnload()
                        else:
                            #analyze age properties
                            self.faceC.analyzeFaceCharacteristics(ids[0])
                            time.sleep(0.1)
                            value = ALMemory.getData("PeoplePerception/Person/"+str(ids[0])+"/AgeProperties")
                            if value[1] > self.confidence:
                                self.age += value[0]
                                self.counter += 1
                    except:
                        ids = []
                else:
                    #calculate mean value
                    self.age /= 4
                    self.onStopped(int(self.age))
                    self.onUnload()
                    return
            raise RuntimeError(self.errorMes)
        except Exception as e:
            raise RuntimeError(str(e))
        self.onUnload()

    def onTimeout(self):
        self.errorMes = "Timeout"
        self.onUnload()

    def cleanDelay(self, fut, fut_ref):
        self.delayed.remove(fut)

    def cancelDelays(self):
        cancel_list = list(self.delayed)
        for d in cancel_list:
            d.cancel()

    def onInput_onStop(self):
        self.onUnload()]]></content></script><Input name="onLoad" type="1" type_size="1" nature="0" inner="1" tooltip="Signal sent when diagram is loaded." id="1" /><Input name="onStart" type="1" type_size="1" nature="2" inner="0" tooltip="Box behavior starts when a signal is received on this input." id="2" /><Input name="onStop" type="1" type_size="1" nature="3" inner="0" tooltip="Box behavior stops when a signal is received on this input." id="3" /><Output name="onStopped" type="2" type_size="1" nature="1" inner="0" tooltip="Returns a number between 0 and 75 indicating the age of the person in front of the robot.&#x0A;&#x0A;Tip:&#x0A;Connect this output to If box to compare the age with a defined value" id="4" /><Output name="onError" type="3" type_size="1" nature="1" inner="0" tooltip='Triggered when age detection failed. &#x0A;Possible error messages:&#x0A;- &quot;No face detected&quot;&#x0A;- &quot;Multiple faces detected&quot;&#x0A;- &quot;Timeout&quot;' id="5" /><Parameter name="Confidence Threshold" inherits_from_parent="0" content_type="2" value="0.35" default_value="0.6" min="0" max="1" tooltip="Set the confidence threshold for the age detection." id="6" /><Parameter name="Timeout (s)" inherits_from_parent="0" content_type="2" value="10" default_value="5" min="1" max="60" tooltip="" id="7" /></Box><Box name="Say Text" id="3" localization="8" tooltip="Say the text received on its input." x="787" y="110"><bitmap>media/images/box/interaction/say.png</bitmap><script language="4"><content><![CDATA[import time
class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self, False)
        self.tts = ALProxy('ALTextToSpeech')
        self.ttsStop = ALProxy('ALTextToSpeech', True) #Create another proxy as wait is blocking if audioout is remote

    def onLoad(self):
        self.bIsRunning = False
        self.ids = []

    def onUnload(self):
        for id in self.ids:
            try:
                self.ttsStop.stop(id)
            except:
                pass
        while( self.bIsRunning ):
            time.sleep( 0.2 )

    def onInput_onStart(self, p):
        self.bIsRunning = True
        try:
            sentence = "\RSPD="+ str( self.getParameter("Speed (%)") ) + "\ "
            sentence += "\VCT="+ str( self.getParameter("Voice shaping (%)") ) + "\ "
            sentence += "You look like you are %s years old" % str(p)
            sentence += "\RST\ "
            id = self.tts.post.say(str(sentence))
            self.ids.append(id)
            self.tts.wait(id, 0)
        finally:
            try:
                self.ids.remove(id)
            except:
                pass
            if( self.ids == [] ):
                self.onStopped() # activate output of the box
                self.bIsRunning = False

    def onInput_onStop(self):
        self.onUnload()]]></content></script><Input name="onLoad" type="1" type_size="1" nature="0" inner="1" tooltip="Signal sent when Diagram is loaded." id="1" /><Input name="onStart" type="3" type_size="1" nature="2" inner="0" tooltip="Box behavior starts when a signal is received on this Input." id="2" /><Input name="onStop" type="1" type_size="1" nature="3" inner="0" tooltip="Box behavior stops when a signal is received on this Input." id="3" /><Output name="onStopped" type="1" type_size="1" nature="1" inner="0" tooltip="Signal sent when Box behavior is finished." id="4" /><Parameter name="Voice shaping (%)" inherits_from_parent="1" content_type="1" value="100" default_value="100" min="50" max="150" tooltip='Used to modify at runtime the voice feature (tone, speed). In a slighty&#x0A;different way than pitch and speed, it gives a kind of &quot;gender or age&#x0A;modification&quot; effect.&#x0A;&#x0A;For instance, a quite good male derivation of female voice can be&#x0A;obtained setting this parameter to 78%.&#x0A;&#x0A;Note: For a better effect, you can compensate this parameter with the&#x0A;speed parameter. For example, if you want to decrease by 20% the voice&#x0A;shaping, you will have to increase by 20% the speed to keep a constant&#x0A;average speed.' id="5" /><Parameter name="Speed (%)" inherits_from_parent="1" content_type="1" value="100" default_value="100" min="50" max="200" tooltip="Changes the speed of the voice.&#x0A;&#x0A;Note: For a better effect, you can compensate this parameter with the voice&#x0A;shaping parameter. For example, if you want to increase by 20% the speed, you&#x0A;will have to decrease by 20% the voice shaping to keep a constant average&#x0A;speed." 
id="6" /><Resource name="Speech" type="Lock" timeout="0" /></Box><Link inputowner="2" indexofinput="2" outputowner="0" indexofoutput="2" /><Link inputowner="4" indexofinput="2" outputowner="2" indexofoutput="4" /><Link inputowner="1" indexofinput="2" outputowner="4" indexofoutput="6" /><Link inputowner="3" indexofinput="2" outputowner="1" indexofoutput="4" /><Link inputowner="0" indexofinput="4" outputowner="3" indexofoutput="4" /></Diagram></BehaviorKeyframe></BehaviorLayer></Timeline></Box></ChoregrapheProject>
<?xml version="1.0" encoding="UTF-8" ?>
<Package name="get_age" format_version="4">
<Manifest src="manifest.xml" />
<BehaviorDescriptions>
<BehaviorDescription name="behavior" src="behavior_1" xar="behavior.xar" />
</BehaviorDescriptions>
<Dialogs />
<Resources />
<Topics />
<IgnoredPaths />
<Translations auto-fill="en_US">
<Translation name="translation_en_US" src="translations/translation_en_US.ts" language="en_US" />
</Translations>
</Package>
<?xml version='1.0' encoding='UTF-8'?>
<package uuid="get_age-e0a234" version="0.0.0">
<names>
<name lang="en_US">Untitled</name>
</names>
<supportedLanguages>
<language>en_US</language>
</supportedLanguages>
<descriptionLanguages>
<language>en_US</language>
</descriptionLanguages>
<contents>
<behaviorContent path="behavior_1">
<userRequestable/>
<nature>interactive</nature>
<permissions/>
</behaviorContent>
</contents>
</package>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE TS>
<TS version="2.1" language="en_US"/>
<?xml version='1.0' encoding='UTF-8'?>
<package uuid="motion-54400a" version="0.0.0">
<names>
<name lang="en_US">Untitled</name>
</names>
<supportedLanguages>
<language>en_US</language>
</supportedLanguages>
<descriptionLanguages>
<language>en_US</language>
</descriptionLanguages>
<contents>
<behaviorContent path="behavior_1">
<userRequestable/>
<nature>interactive</nature>
<permissions/>
</behaviorContent>
</contents>
</package>
<?xml version="1.0" encoding="UTF-8" ?>
<Package name="motion" format_version="4">
<Manifest src="manifest.xml" />
<BehaviorDescriptions>
<BehaviorDescription name="behavior" src="behavior_1" xar="behavior.xar" />
</BehaviorDescriptions>
<Dialogs />
<Resources />
<Topics />
<IgnoredPaths />
<Translations auto-fill="en_US">
<Translation name="translation_en_US" src="translations/translation_en_US.ts" language="en_US" />
</Translations>
</Package>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE TS>
<TS version="2.1" language="en_US"/>
<?xml version='1.0' encoding='UTF-8'?>
<package uuid="speech_dance-93f625" version="0.0.0">
<names>
<name lang="en_US">Untitled</name>
</names>
<supportedLanguages>
<language>en_US</language>
</supportedLanguages>
<descriptionLanguages>
<language>en_US</language>
</descriptionLanguages>
<contents>
<behaviorContent path="behavior_1">
<userRequestable/>
<nature>interactive</nature>
<permissions/>
</behaviorContent>
</contents>
</package>
<?xml version="1.0" encoding="UTF-8" ?>
<Package name="speech_dance" format_version="4">
<Manifest src="manifest.xml" />
<BehaviorDescriptions>
<BehaviorDescription name="behavior" src="behavior_1" xar="behavior.xar" />
</BehaviorDescriptions>
<Dialogs />
<Resources>
<File name="surprise3" src="behavior_1/surprise3.ogg" />
</Resources>
<Topics />
<IgnoredPaths />
<Translations auto-fill="en_US">
<Translation name="translation_en_US" src="translations/translation_en_US.ts" language="en_US" />
</Translations>
</Package>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE TS>
<TS version="2.1" language="en_US">
<context>
<name>behavior_1/behavior.xar:/Say</name>
<message>
<source>Hello</source>
<comment>Text</comment>
<translation type="vanished">Hello</translation>
</message>
<message>
<location filename="behavior_1/behavior.xar" line="0"/>
<source>Would you like me to dance?</source>
<comment>Text</comment>
<translation type="unfinished">Would you like me to dance?</translation>
</message>
</context>
<context>
<name>behavior_1/behavior.xar:/Say (1)</name>
<message>
<source>Hello</source>
<comment>Text</comment>
<translation type="vanished">Hello</translation>
</message>
<message>
<location filename="behavior_1/behavior.xar" line="0"/>
<source>Sorry, I could not understand you. Could you repeat your answer?</source>
<comment>Text</comment>
<translation type="unfinished">Sorry, I could not understand you. Could you repeat your answer?</translation>
</message>
</context>
<context>
<name>behavior_1/behavior.xar:/Say (2)</name>
<message>
<source>Oke, I will dance.</source>
<comment>Text</comment>
<translation type="obsolete">Oke, I will dance.</translation>
</message>
<message>
<source>Good, I will dance.</source>
<comment>Text</comment>
<translation type="obsolete">Good, I will dance.</translation>
</message>
<message>
<location filename="behavior_1/behavior.xar" line="0"/>
<source>Nice</source>
<comment>Text</comment>
<translation type="unfinished">Nice</translation>
</message>
</context>
<context>
<name>behavior_1/behavior.xar:/Say (3)</name>
<message>
<source>Oke, I will not dance.</source>
<comment>Text</comment>
<translation type="obsolete">Oke, I will not dance.</translation>
</message>
<message>
<source>Sad, I will not dance.</source>
<comment>Text</comment>
<translation type="obsolete">Sad, I will not dance.</translation>
</message>
<message>
<source>Then I will not dance.</source>
<comment>Text</comment>
<translation type="obsolete">Then I will not dance.</translation>
</message>
<message>
<location filename="behavior_1/behavior.xar" line="0"/>
<source>Okay</source>
<comment>Text</comment>
<translation type="unfinished">Okay</translation>
</message>
</context>
<context>
<name>behavior_1/behavior.xar:/Say (4)</name>
<message>
<location filename="behavior_1/behavior.xar" line="0"/>
<source>That's not good</source>
<comment>Text</comment>
<translation type="unfinished">That's not good</translation>
</message>
</context>
</TS>