Compressed Sensing

I had the opportunity to study Compressed Sensing for my undergraduate thesis under K. V. S. Hari and Chandra Sekhar Seelamantula at the Indian Institute of Science.

The Shannon-Nyquist theorem has been the cornerstone of Information Theory for over 80 years. It states that a signal must be sampled at a rate at least twice its highest frequency in order to be reconstructed without loss. In accordance with this theory, all modern digital systems acquire data at this rate and then compress it to save space and bandwidth. Recently, a new sampling/sensing paradigm called Compressed Sensing was developed, which states that certain signals with specific structure (those that are sparse in some basis) can be sampled at a rate much lower than that stipulated by the Nyquist theorem, and that these 'under-sampled' signals can be recovered satisfactorily using linear programming techniques.
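To make the recovery step concrete: if a signal x ∈ ℝⁿ is sparse (has few non-zero entries) and we observe m ≪ n linear measurements y = Φx, the standard convex formulation, basis pursuit, minimises the ℓ1 norm of the solution. This is textbook CS rather than anything specific to the thesis, and it reduces to a linear program by splitting x into its positive and negative parts:

```latex
% Basis pursuit: the sparsest signal consistent with the measurements
\hat{x} = \arg\min_{x \in \mathbb{R}^n} \lVert x \rVert_1
\quad \text{subject to} \quad \Phi x = y
% Writing x = u - v with u, v \ge 0 gives the equivalent linear program
% \min_{u, v} \mathbf{1}^\top (u + v)
% \quad \text{subject to} \quad \Phi (u - v) = y, \quad u, v \ge 0
```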

The main aim of this project was to study Compressed Sensing (CS) and its reconstruction algorithms, and to apply this knowledge to real-world signals such as Optical Coherence Tomography (OCT) scans and images. The project is roughly divided into three parts. In the first part, a few CS algorithms are implemented in MATLAB and tested on artificial data. In the second, CS algorithms are applied to OCT and image data. The last part presents the results of CS algorithms applied to OCT data using a special 'sparsifying' basis, and compares the performance of different algorithms.
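As a flavour of the first part, here is a minimal MATLAB sketch of Orthogonal Matching Pursuit (OMP), one of the standard greedy CS solvers referred to later. This is an illustrative sketch rather than the thesis implementation, and the name omp_sketch is made up for this post:

```matlab
function x_hat = omp_sketch(Phi, y, k)
% Orthogonal Matching Pursuit: greedily pick the column of Phi most
% correlated with the residual, then re-fit by least squares on the
% support found so far. A minimal sketch, not the thesis code.
[~, n] = size(Phi);
r = y;                            % residual
S = [];                           % estimated support
for t = 1:k
    [~, j] = max(abs(Phi' * r));  % best-matching column
    S = union(S, j);              % grow the support
    xS = Phi(:, S) \ y;           % least-squares fit on the support
    r = y - Phi(:, S) * xS;       % update the residual
end
x_hat = zeros(n, 1);
x_hat(S) = xS;                    % embed the fit in the full vector
end
```

A quick test on artificial data, mirroring the first part of the project (the sizes are arbitrary):

```matlab
n = 256; m = 80; k = 10;          % ambient dim, measurements, sparsity
Phi = randn(m, n) / sqrt(m);      % random Gaussian measurement matrix
x = zeros(n, 1);
x(randperm(n, k)) = randn(k, 1);  % artificial k-sparse signal
y = Phi * x;                      % m << n compressed measurements
x_hat = omp_sketch(Phi, y, k);
disp(norm(x - x_hat) / norm(x));  % relative error, near zero here
```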

You can find a copy of the report here.

There are, however, some results that were obtained after the report was submitted for review. Read on…

[Figure: A comparison of the reconstruction performance of various sparsifying bases]

Extending previous work in the field, we used a windowed cosine function as a sparsifying basis, and applied and compared various 'greedy' algorithms to recover the image, an onion peel specimen pictured in the montage below; a sketch of this pipeline follows the figure.

[Figure: A comparison of the performance of various greedy algorithms on an onion peel specimen]
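To sketch how a sparsifying basis enters the picture: an OCT scan line is not sparse in the sample domain, but its coefficients in a cosine-type basis are, so the greedy solver runs in that domain. The snippet below is a hedged stand-in: a plain orthonormal DCT-II in place of the windowed cosine basis from the thesis, random subsampling that keeps 20% of the samples (matching the 80% discard rate described below), and the omp_sketch function from the earlier sketch assumed to be saved on the MATLAB path:

```matlab
n = 256; k = 8;                       % illustrative sizes
Psi = dct2_synthesis(n);              % DCT-II stand-in for the windowed
                                      % cosine basis used in the thesis
s = zeros(n, 1);
s(randperm(n, k)) = randn(k, 1);      % sparse coefficient vector
x = Psi * s;                          % signal is dense in the sample domain
keep = randperm(n, round(0.2 * n));   % keep a random 20% of the samples
Phi = eye(n); Phi = Phi(keep, :);     % subsampling as a measurement matrix
y = Phi * x;                          % the 'under-sampled' data
s_hat = omp_sketch(Phi * Psi, y, k);  % greedy recovery of the coefficients
x_hat = Psi * s_hat;                  % synthesise the recovered signal
disp(norm(x - x_hat) / norm(x));      % relative reconstruction error

function Psi = dct2_synthesis(n)
% Orthonormal DCT-II synthesis matrix, built explicitly so no toolbox
% is needed (local functions in scripts need MATLAB R2016b or later).
[col, row] = meshgrid(0:n-1, 0:n-1);
Psi = sqrt(2 / n) * cos(pi * (2 * row + 1) .* col / (2 * n));
Psi(:, 1) = Psi(:, 1) / sqrt(2);
end
```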

In summary, we randomly discarded 80% of the data in an OCT scan and reconstructed it with insignificant error and less noise than the current standard reconstruction technique. This could result in cheaper, faster, and more efficient OCT scans in the future.
