

====== Progress Reports ======

===== February 17th - March 2nd =====

==== Progress: ====

  * Finished and delivered the final report for the course //Preparação da Dissertação do Mestrado Integrado em Engenharia Eletrotécnica e de Computadores//;
  * Analyzed and configured the tools provided at Fraunhofer AICOS;
  * Started the development of the application: implemented the reception of the incoming data from the smartphone's integrated sensors (see the sketch after this report) and a test tone to confirm the connection with a headset;
  * Studied the A2DP Bluetooth profile, which should be implemented by the wireless Bluetooth headset to be acquired, and found it adequate for this project's purpose;
  * Integrated the following libraries into the application: [[http://developer.echonest.com/client_libraries.html|jEN]], a Java port of the EchoNest API; a port of [[http://www.cs.waikato.ac.nz/ml/weka/|WEKA]] for the Android OS; [[https://github.com/libpd|libpd]], more specifically its port for the Android OS; and, finally, the library provided by Fraunhofer to process the incoming data from the smartphone's sensors. This implied learning more about the [[http://www.gradle.org/|Gradle]] build tool in order to define the library dependencies, especially in the case of libpd;
  * Configured the SVN repository to be used. It initially caused problems, but they disappeared after restructuring the folder where the application is being developed;
  * Attended a presentation at Fraunhofer AICOS, on February 27th, of other projects being developed in the research center in which this project is also included.

==== Difficulties: ====

  * Unable to invoke libpd's functionalities from the application; tried to use the Android NDK, but later assessed that it wasn't a solution to this problem.
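As a rough illustration of how the reception of sensor data mentioned above can be wired up, below is a minimal sketch of an accelerometer listener using the standard Android SensorManager API. The class name and the requested rate are illustrative assumptions; in the actual application the samples are handed to Fraunhofer's signal processing library, which this sketch does not show.

<code java>
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

/** Hypothetical sketch: receives raw accelerometer samples from the smartphone. */
public class AccelerometerReceiver implements SensorEventListener {

    private final SensorManager sensorManager;
    private final Sensor accelerometer;

    public AccelerometerReceiver(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    }

    /** Requests samples roughly every 20 ms (~50 Hz, the rate discussed later in these reports). */
    public void start() {
        sensorManager.registerListener(this, accelerometer, 20000);
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        long timestampNs = event.timestamp; // sensor timestamp in nanoseconds
        float ax = event.values[0];         // acceleration along the device axes, in m/s^2
        float ay = event.values[1];
        float az = event.values[2];
        // In the real application the sample would be forwarded to the sensor module here.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not used in this sketch.
    }
}
</code>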
===== March 3rd - March 16th =====

==== Progress: ====

  * Developed this website;
  * Defined an additional feature for the application: the creation of personalized profiles for each patient, in order to specify the most apt sound-producing modules for each case;
  * After discussion with the co-supervisor at Fraunhofer AICOS, it was decided to use a stereo headset. This led to studying and choosing the most apt Bluetooth stereo headset to be used during the tests with patients;
  * Fixed the problem of not being able to invoke libpd's functionalities from the application. The problem was that Gradle, as of the writing of this sentence, does not support JNI; to solve this, it was necessary to modify an instruction in a Gradle build file;
  * Installed and used WEKA with some public datasets and tested several types of feature selectors and classifiers;
  * Improved the application by updating its layout and applying several optimizations to maximize its performance;
  * Found out about [[http://wekinator.cs.princeton.edu/|Wekinator]], a tool that might be quite useful as a bridge between the Sensor and Sound modules of the application, but it requires the use of [[http://chuck.cs.princeton.edu/|ChucK]], an alternative to [[http://puredata.info/|Pure Data]];
  * Studied the documentation and data provided from the REMPARK project, which led to a better understanding of the constraints involved in the system to be developed;
  * Started implementing the symptom diagnosis module, with the definition of the conditions under which the sensor module should activate the sound module;
  * Started to define a test protocol to be used in the evaluation of the application with patients;
  * Investigated the feasibility of using Quaternion Fourier Transforms, computed from the quaternions obtained with the signal processing library developed by Fraunhofer AICOS, in order to perform a frequency analysis of the incoming sensor data.

==== Difficulties: ====

  * No relevant difficulties.

===== March 17th - March 23rd =====

Absent due to academic reasons.

===== March 24th - April 6th =====

==== Progress: ====

  * Implemented the Step Detector, the Step Length Estimator and a rudimentary version of a calculator that outputs the cadence associated with the human gait;
  * Now storing the number of detected steps, their timestamps and their respective (estimated) step lengths in a .csv file (see the sketch after this report);
  * Researched further the possible connection between auditory cues and gait in PWP, which led to finding out that modelling the human gait as a 1/f fluctuation is legitimate and well founded.

==== Difficulties: ====

  * Analyzed how to further improve the audio performance by studying libpd's native code (in C) and tried to find a way to initialize OpenSL ES with the native (audio) buffer size of the Nexus S, but wasn't able to do it.
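To make the step logging and cadence calculation described above concrete, here is a minimal, self-contained sketch: it derives the cadence in steps per minute from the timestamps of the detected steps and appends each step to a .csv file. The class name and the column layout are assumptions for illustration, not the application's actual format.

<code java>
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the step log and cadence calculation. */
public class StepLog {

    private final List<Long> stepTimestampsMs = new ArrayList<Long>();
    private final String csvPath;

    public StepLog(String csvPath) {
        this.csvPath = csvPath;
    }

    /** Records one detected step and appends it to the .csv file. */
    public void addStep(long timestampMs, boolean leftFoot, double estimatedLengthM) throws IOException {
        stepTimestampsMs.add(timestampMs);
        PrintWriter out = new PrintWriter(new FileWriter(csvPath, true));
        try {
            // Columns: timestamp [ms], foot side, estimated step length [m], current cadence [steps/min]
            out.printf("%d,%s,%.3f,%.1f%n", timestampMs, leftFoot ? "L" : "R",
                    estimatedLengthM, cadenceStepsPerMinute());
        } finally {
            out.close();
        }
    }

    /** Cadence in steps/min, derived from the mean interval between the recorded steps. */
    public double cadenceStepsPerMinute() {
        int n = stepTimestampsMs.size();
        if (n < 2) {
            return 0.0;
        }
        double meanIntervalMs =
                (stepTimestampsMs.get(n - 1) - stepTimestampsMs.get(0)) / (double) (n - 1);
        return 60000.0 / meanIntervalMs;
    }
}
</code>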
===== April 7th - April 20th =====

==== Progress: ====

  * Discussed the ongoing development of the project with the co-supervisor. We agreed that the week of April 11th would be too premature to test the application and, since it would still take time to schedule the tests with PWP given their availability, the tests will most likely take place either in the last week of April or the first week of May, which means there may be no time to develop the music recommendation module;
  * The analysis of whether the music recommendation module would still be worthwhile indirectly allowed the definition of two main ways of providing inputs to the sound module. The first, currently in development, uses parameters associated with the human gait; the second would be based on the analysis of the fractal pattern that occurs, over the long term, during human movement. Given the larger computational effort associated with the latter, the focus will remain on the former. It was also found that, according to the theory of optimal movement variability, an optimal gait variability consists in a balance between predictability and complexity, and a restoration of it could be achieved with RAS with a varying temporal structure. Thus, the most apt type of music to be synthesized would have to be algorithmic music based on fractals;
  * The current state of the application is a pedometer that outputs a sound signal when a step occurs, differentiates left from right steps, estimates the length of each step and the cadence, and extracts the RMS of the patient's vertical accelerations in 5-second windows with 50% overlap (see the sketch after this report). In parallel, the application stores, in real time, the incoming raw data from the accelerometer, gyroscope and magnetometer, the number of detected steps with their respective parameters, and the RMS values of the vertical accelerations for each time window;
  * With the data acquired on the smartphone during 10-step normal walks, stored in .csv files, the accelerometry is analyzed in order to validate the efficiency of the Step Detector and the Step Length Estimator and to modify the code when undesirable or wrong results are encountered. This cycle is constantly repeated during the development of the application;
  * To evaluate the performance of the application, the systrace tool, included with the Android SDK Tools, graphically shows the execution of the application's methods and threads, which makes it possible to catch bottlenecks. After locating problems, the application's code is changed to eliminate them;
  * A rudimentary version of the metronome module was implemented.

==== Difficulties: ====

  * Checked the sampling frequency of the accelerometer of the Nexus S programmatically and it seems that it wasn't set at 40 Hz as expected, but at around 48 Hz. The sampling frequency is therefore set to 50 Hz, which required some parameters and buffer sizes of the sensor module to be adapted.
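Below is a minimal sketch of the windowed RMS extraction described above, assuming 256-sample windows (roughly 5 seconds at ~50 Hz, the window size mentioned in the April 17th meeting minute) with a 50% overlap. The class name and buffer handling are illustrative, not the application's actual implementation.

<code java>
/** Hypothetical sketch: RMS of the vertical acceleration over overlapping windows. */
public class VerticalRmsExtractor {

    private static final int WINDOW_SIZE = 256;           // ~5 s at ~50 Hz
    private static final int HOP_SIZE = WINDOW_SIZE / 2;  // 50% overlap

    private final double[] window = new double[WINDOW_SIZE];
    private int filled = 0;

    /** Feeds one vertical acceleration sample; returns the window RMS when a window completes, NaN otherwise. */
    public double addSample(double verticalAcceleration) {
        window[filled++] = verticalAcceleration;
        if (filled < WINDOW_SIZE) {
            return Double.NaN;
        }
        double sumOfSquares = 0.0;
        for (double a : window) {
            sumOfSquares += a * a;
        }
        double rms = Math.sqrt(sumOfSquares / WINDOW_SIZE);
        // Keep the second half of the window so consecutive windows overlap by 50%.
        System.arraycopy(window, HOP_SIZE, window, 0, WINDOW_SIZE - HOP_SIZE);
        filled = WINDOW_SIZE - HOP_SIZE;
        return rms;
    }
}
</code>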
===== April 21st - May 4th =====

==== Progress: ====

  * It is now possible to create profiles for each user with their own preferences, and the application's service can be turned on and off with the press of a button on the main screen;
  * Found a new empirical method for the estimation of the step length, devised by Jim Scarlett from Analog Devices. In several articles it was usually, for the waist position, the method that yielded the smallest error rate, both before and after calibration, so the application will use it;
  * Performed tests with two volunteers to evaluate the new empirical step length estimation method, through regular walking and on a treadmill at a pre-determined baseline cadence and at -20% and +20% variations of this cadence. These tests were also used to determine whether the correct side of the foot contacts was detected, after modifying that part of the code as a result of a revision of part of the state of the art. Afterwards, the acquired data was analyzed in MATLAB and used to define which filters to apply in order to eliminate unwanted and irrelevant frequencies in the analysis of human movement;
  * Implemented calibration of the Step Length Estimator, enhanced it further and tested this enhancement; the results are better than with the previously used empirical method. The protocol for the Step Length Estimator's training runs was also defined. The latest mean error rate was around 5%, which is acceptable;
  * Implemented a threshold-based approach to detect bradykinesia through the RMS of the vertical accelerations of the human body and the calculation and update, from sample window to sample window, of its mean and standard deviation (see the sketch after this report). To test this, volunteers walked and simulated Parkinsonian gait and bradykinesia. The detection was mostly correct;
  * Showcased the current results to the co-supervisor and was advised that, for the first tests with PWP starting on May 5th, the application shouldn't play any kind of sound (in order not to entrain the patients), should store all incoming raw data from the sensors and should have its cadence calculation corrected; these changes were done shortly afterwards.

==== Difficulties: ====

  * Checked the actual sampling frequencies of the gyroscope and magnetometer on the Nexus S: 107.24 Hz and 49.3 Hz, respectively. Attempts to lower the gyroscope's sampling frequency proved impossible, so the application's sampling frequency remains 50 Hz;
  * The ordered Bluetooth headphones arrived and were tested with the Nexus S, and a noticeable delay between the foot contact and the sound cue was detected. They were then tested with a Galaxy Nexus, where the delay between a foot contact and its sound cue was around 250 ms, which is acceptable for this project. Thus, in a further stage of this project, the development will center around the Galaxy Nexus.
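A minimal sketch of the threshold-based detection described above, assuming the detector keeps a running mean and standard deviation of the window RMS values and flags bradykinesia when a new value falls well below what has been typical so far. The update rule (Welford's online algorithm) and the threshold factor are illustrative assumptions, not the application's actual parameters.

<code java>
/** Hypothetical sketch: flags possible bradykinesia from the RMS of the vertical accelerations. */
public class BradykinesiaDetector {

    private static final double THRESHOLD_FACTOR = 1.5; // illustrative value, not the real one

    private long count = 0;
    private double mean = 0.0;
    private double m2 = 0.0; // sum of squared deviations (Welford's online algorithm)

    /** Updates the running statistics with a new window RMS and returns the detection decision. */
    public boolean update(double windowRms) {
        count++;
        double delta = windowRms - mean;
        mean += delta / count;
        m2 += delta * (windowRms - mean);
        double std = count > 1 ? Math.sqrt(m2 / (count - 1)) : 0.0;

        // Flag bradykinesia when the current window's RMS is well below the running mean.
        return count > 1 && windowRms < mean - THRESHOLD_FACTOR * std;
    }
}
</code>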
===== May 5th - May 18th =====

==== Progress: ====

  * Defined the test protocol to be followed during the 1st stage of tests with 5 PWP in Oporto;
  * Developed a port of the application's sensor module in standard Java, so that a regular computer can perform an offline analysis of the raw data gathered from the smartphone's sensors. This port will be useful for analyzing the data gathered during the tests with the PWP;
  * Fixed issues related to shutting down the application's service;
  * Performed the 1st stage of tests with 5 PWP in Oporto;
  * Labelled and sorted the sensor data acquired during the 1st stage of tests.

==== Difficulties: ====

  * Due to the experimental conditions during the tests, some of the sensor data had to be discarded. This process involved comparing the annotations taken during the tests with the gathered accelerometry.

===== May 19th - June 1st =====

==== Progress: ====

  * A posterior analysis of the sensor data gathered during the 1st stage of tests was performed. It was found that the step length estimator, when uncalibrated, produced bad estimates during the tasks in which one of the PWP was in the "OFF" motor state. When the step length estimator is calibrated the results are better, but the deviation of the estimate from the real value is still too large. A possible future work is to implement a step length estimator which performs better in these particular cases;
  * Also with the gathered data, the algorithm used by the step detector was optimized in order to increase the successful detection rate of the steps;
  * Fixed performance and stability issues in the application;
  * Corrected the calculation of the walking cadence;
  * The calculation of the swing time duration, due to the difficulty of detecting toe-offs and heel-strikes in the accelerations gathered during the tests with the PWP, involves an approximation. This approximation had to be made in order for the sound synthesis submodule to be controlled;
  * Developed the sound module, more specifically the metronome, sound synthesis and music synthesis submodules (a sketch of the metronome idea is given after the June 16th - June 29th report);
  * Added a gait learning mode to the sound module, in order to allow it to work even if bradykinesia is not detected;
  * Defined the test protocols for the 2nd stage of tests, which will be done in Lisbon and Oporto, and prepared all the material needed to perform these tests.

==== Difficulties: ====

  * No relevant difficulties.

===== June 2nd - June 15th =====

==== Progress: ====

  * Tests were done in Lisbon and Oporto with 10 PWP in order to assess the performance of the application's sound module, with sensor and video data gathered simultaneously;
  * Afterwards, all the video and sensor data was labelled, matched and sorted. The validation of the data relevant for the analysis was partially done;
  * Following a suggestion from one of the physical therapists during the tests, a pace-fixing mechanism was implemented in the application afterwards. Studying the effects of this mechanism on the PWP is a possible future work.

==== Difficulties: ====

  * Sorting, matching, labelling and discarding irrelevant data is a very time-consuming process. An entire week was necessary to do it.

===== June 16th - June 29th =====

==== Progress: ====

  * An analysis of the previously processed data was done. It was necessary to count all the steps in all the videos, match them again with the processed data, and validate the processed data in order to obtain data that can be properly analyzed (e.g. if a PWP strayed from the track at a certain time during a task, that data is discarded for the purposes of analysis). Afterwards, spatio-temporal gait parameters had to be calculated, both for the real and the estimated values, for all the tasks performed during the tests. In total, 7384 steps, or 66 minutes of PWP walking, were gathered and analyzed. This process, along with an analysis of the performance of the sensor module, took an entire week;
  * Writing the dissertation.

==== Difficulties: ====

  * Due to the overwhelming amount of data and the diverse experimental conditions in which it was gathered, the analysis was very difficult. It was possible to assess the performance of the sensor module, but a thorough analysis of the sound module wasn't possible due to time constraints.
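As an illustration of the metronome submodule mentioned in the May 19th - June 1st report, below is a minimal, self-contained sketch that converts a target cadence (in steps per minute) into a tick interval and schedules the ticks. In the actual application each tick would drive the libpd-based sound synthesis rather than the placeholder used here; the class and method names are hypothetical.

<code java>
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

/** Hypothetical sketch of a cadence-driven metronome. */
public class GaitMetronome {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> ticking;

    /** Starts (or restarts) the metronome at the given cadence, in steps per minute. */
    public synchronized void start(double cadenceStepsPerMinute) {
        stop();
        long periodMs = Math.round(60000.0 / cadenceStepsPerMinute);
        ticking = scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                tick();
            }
        }, 0, periodMs, TimeUnit.MILLISECONDS);
    }

    public synchronized void stop() {
        if (ticking != null) {
            ticking.cancel(false);
            ticking = null;
        }
    }

    /** Placeholder: in the application this would trigger the cue sound (e.g. via the libpd patch). */
    private void tick() {
        System.out.println("tick");
    }
}
</code>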
====== Meeting Minutes ======

===== February 21st =====

This meeting took place at Fraunhofer AICOS with the co-supervisor. The outcomes were to continue the development of the application's skeleton, which had already been started; to study the possibility of using a wireless headset with the application; and that a library developed internally at Fraunhofer, which allows the processing of the incoming data from the smartphone's sensors, would be provided.

===== March 12th =====

This meeting took place at Fraunhofer AICOS with the co-supervisor. The current state of the application was discussed and the co-supervisor provided some of the documentation and data used in the REMPARK project, to be analyzed and then used to help in the development of the application, more specifically the implementation of the sensor module. It was also decided to delay the conclusion of the implementation of the sensor module until April 11th, a delay that will be compensated by fusing [[planning|tasks 12 and 13]], which aren't essential, into a single week.

===== April 17th =====

This meeting took place at Fraunhofer AICOS with the co-supervisor. The main result was the definition of the test protocol to be followed during the tests of the application's sensor module. These tests will start on the 5th of May, will involve 6 patients on different days and, depending on the availability of the patients, will take place over an indefinite period of time. The current state of the application was also showcased: for now it works as a pedometer that outputs a sound signal whenever a step occurs, with different sounds depending on whether the detected step was with the left or the right foot. The application also estimates the step length associated with each step and the cadence of the gait, and extracts the RMS of the vertical accelerations in windows of 256 samples with a 50% overlap.

===== May 13th =====

This meeting took place at Fraunhofer AICOS with the co-supervisor. The main results were the revision of the test protocol to be followed during the tests of the application's sensor module and a checklist of the material to be used, since these tests were delayed, due to scheduling issues, to May 14th through May 16th.

===== May 26th =====

This meeting took place at Fraunhofer AICOS with the co-supervisor. The main results were the definition of the test protocol to be followed during the tests of the application's sound module and a checklist of the material to be used. These tests, which will happen between the 2nd and 6th of June, will be done in Lisbon and Oporto.