15 Jun 2016
Top-Hat filter based object proposal generation:
Classical CNNs such as AlexNet, CaffeNet, and VGG16 give a classification output, so a method is required to find object proposals in the scene and pass them to the CNN for classification. I have used bilateral and top-hat filters to generate object proposals. The code uses the filter implementations in OpenCV.
The pipeline is as follows:
- Load an image
src = imread( file );
- Bilateral filtering: an edge-preserving color and spatial filter.
bilateralFilter(src, blt, 8, 40, 5, 1 );
- Apply the specified morphology operation (top-hat).
Mat element = getStructuringElement( morph_elem, Size( 2*morph_size + 1, 2*morph_size + 1 ), Point( morph_size, morph_size ) );
morphologyEx( blt, tophat, operation, element);
- Thresholding to create a binary image.
cvtColor(tophat, gray, CV_BGR2GRAY);
bw = threshval < 128 ? (gray > threshval) : (gray < threshval);
- Labeling each connected segment.
int nLabels = connectedComponents(bw, labelImage, 8);
- Fitting bounding boxes to the segments.
Harit Pandya
11 Jun 2016
Working with the Caffe C++ API:
This tutorial explains the Caffe classification example. The implementation is present in examples/cpp_classification.
Required inputs:
- model_file: this file describes the layer configuration and its hyperparameters.
- trained_file: this file contains the learned weights.
- mean_file: it is a good idea to subtract the mean of all images in the dataset; this essentially does background subtraction.
- label_file: the CNN gives a prediction probability for every category it was trained on; the label file gives the mapping between category id and class name.
Pipeline:
- Set the device (CPU/GPU)
- Load the network
net_.reset(new Net<float>(model_file, TEST));
net_->CopyTrainedLayersFrom(trained_file);
- Load the binaryproto mean file.
SetMean(mean_file);
- Set the input and output layers (note: only a single input and a single output layer are supported).
Blob<float>* input_layer = net_->input_blobs()[0];
Blob<float>* output_layer = net_->output_blobs()[0];
- Compute prediction for given test image.
std::vector<Prediction> predictions = classifier.Classify(image);
The Prediction structure has two components: Prediction.first gives the label and Prediction.second gives the corresponding score.
The prediction method pipeline is given as follows:
- Change the dimensions of the input layer.
input_layer->Reshape(1, num_channels_,
input_geometry_.height, input_geometry_.width);
- Forward the dimension change to all layers.
net_->Reshape();
- Wrap the input layer of the network in separate cv::Mat objects (one per channel). This way we save one memcpy operation, and we don't need to rely on cudaMemcpy2D. The last preprocessing operation will write the separate channels directly to the input layer.
WrapInputLayer(&input_channels);
- Convert the input image to the input format of the network and write the separate BGR planes directly to the input layer of the network.
Preprocess(img, &input_channels);
- Run a forward pass for the network.
net_->ForwardPrefilled();
- Copy the output layer to a std::vector.
Blob<float>* output_layer = net_->output_blobs()[0];
const float* begin = output_layer->cpu_data();
const float* end = begin + output_layer->channels();
return std::vector<float>(begin, end);
Harit Pandya
11 Jun 2016
This is the second post in the series of posts pertaining to the project "Automating the uploading of binary files using git-annex". This post aims to show the progress made until the mid-term evaluation.
We planned to write an abstraction over git-annex to add files to a remote storage server. Git-annex provides support for external special remotes through a message-based protocol; the External Special Remote Protocol prescribed there needs to be followed.
But we found some existing tools, git-annex-remote-rclone, git-annex-remote-hubic, and dropboxannex, so we decided to test them, debug them, and change them according to our requirements. We tried to install and test all of them, and each one was stuck with some kind of error.
In the case of dropboxannex, git init was taking a lot of time, so we opened an issue. The other two had errors as well. We decided to focus our energy on one of them and found that git-annex-remote-rclone was the best of all, as it can support all the remote storage services supported by rclone, which includes a lot of them, from Dropbox and hubiC to Yandex Disk. So we decided to debug it.
It was showing the error "Remote origin not usable by git-annex; setting annex-ignore". We tried to pinpoint the error, but it looked like it was occurring inside git-annex; somehow git-annex was not able to use the rclone remote. After a lot of struggling, it came to our notice that the DIRHASH-LOWER message was first supported by git-annex version 6.20160511, and we were using the official version shipped with Ubuntu. After cloning the git-annex repo and building and installing from source, we were able to run git-annex-remote-rclone.
Now the tool works, but as already stated in the problem statement, we need to build a tool that further eases the process, so a little more abstraction will work best. We decided to build that abstraction according to the following steps:
Installing rclone [once only]
1. Install rclone into your $PATH, e.g. /usr/local/bin
2. Copy git-annex-remote-rclone into your $PATH
3. Configure an rclone remote: rclone config
Using the git-annex remote
1. git annex init
2. git annex initremote <remote_name> type=external externaltype=rclone target=<rclone_target_name> prefix=git-annex chunk=50MiB encryption=shared mac=HMACSHA512 rclone_layout=lower
3. git annex add <binary_files>
4. git commit -am <message> <binary_files>
5. git push -u origin master
6. git annex sync --content
7. git annex copy <binary_files> --to <remote_name>
Cloning repos
1. git clone <repo address>
2. cd <cloned_repo_folder_name>
3. git annex sync
4. git annex enableremote <remote_name>
5. git annex get <binary_files> --from <remote_name>
The abstraction over the above steps will be as follows:
Install Robocomp
Installing rclone [once]
1. rclone config
Using abstract tool
1. git_tool initremote <remote_name> type=external externaltype=rclone target=<rclone_target_name> prefix=git-annex chunk=50MiB encryption=shared mac=HMACSHA512 rclone_layout=lower
2. git_tool add <binary_files> <remote_name>
Cloning repos
1. git clone <repo address>
2. cd <cloned_repo_folder_name>
3. git_tool get <binary_files> <remote_name>
Swapnil Sharma
05 Jun 2016
Basic coding
Now that I have decided on the different features of rcmaster, Pablo suggested using something similar to how ROS 2.0 implements node discovery. They are planning to use an external middleware named DDS. DDS uses a custom lightweight discovery protocol for finding other nodes, which eliminates the need for a central node keeping a registry of all nodes. But after some discussion with other community members, we decided to stick with rcmaster, as using DDS would negate Robocomp's middleware-independence policy, and it doesn't give much benefit compared to the complexity of its implementation.
We also discussed different ways to implement the multi-robot scenario, which can be done in two ways. First, we could have only one robot running rcmaster, with all other robots configured to use it. In this case the whole load falls on that one rcmaster. This doesn't require any extra coding in rcmaster; we would only need to change the environment variable that points to the rcmaster on all robots. But the downside is that there may be heavy latency, as there is only one rcmaster and all the lookups and registrations are RPCs. Another downside is that if the one robot hosting the rcmaster fails or gets disconnected, then all other robots fail with it. The second solution is to let all robots run local rcmasters that sync their registries with all other robots in the network. But in this case the issue is how the robots will find each other. One straightforward solution is to hard-code all the robots' IPs in each robot, but this may be really tedious. Hence we may use a discovery protocol such as UDP multicast for finding other rcmasters.
This week I also wrote a basic interface for rcmaster, both the IDSL and the Slice file, and had a discussion about them with community members. After making the few changes suggested, I will begin with the rest of the implementation.
Yash Sanap