Michael S. Brown

Email Address
dcsmsb@nus.edu.sg


Organizational Unit
COMPUTING (faculty)

Publication Search Results

Now showing 1 - 10 of 55
  • Publication
    High quality depth map upsampling for 3D-TOF cameras
    (2011) Park, J.; Kim, H.; Tai, Y.-W.; Brown, M.S.; Kweon, I.; COMPUTER SCIENCE
    This paper describes an application framework to perform high-quality upsampling on depth maps captured from a low-resolution and noisy 3D time-of-flight (3D-ToF) camera that has been coupled with a high-resolution RGB camera. Our framework is inspired by recent work that uses nonlocal means filtering to regularize depth maps in order to maintain fine detail and structure. Our framework extends this regularization with an additional edge weighting scheme built from several image features computed from the high-resolution RGB input. Quantitative and qualitative results show that our method outperforms existing approaches for 3D-ToF upsampling. We describe the complete process for this system, including device calibration, scene warping for input alignment, and how the results can be further refined using simple user markup. © 2011 IEEE.
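    The full pipeline above (ToF–RGB calibration, warping, NLM-based regularization) is more than a snippet, but the core idea, letting high-resolution RGB edges guide depth upsampling, can be sketched with a simple joint bilateral upsampling filter. This is an illustrative stand-in for the paper's formulation; all parameter names and defaults are assumptions:

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, radius=2,
                             sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-res depth map using a high-res RGB guide image.

    Each neighbour is weighted by spatial distance and by colour
    similarity in the guide, so depth edges snap to RGB edges.
    (Illustrative stand-in for the paper's NLM-based scheme.)
    """
    H, W = guide_hr.shape[:2]
    h, w = depth_lr.shape
    sy, sx = H / h, W / w
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            gy, gx = y / sy, x / sx              # position in low-res grid
            cy, cx = int(round(gy)), int(round(gx))
            wsum = vsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = cy + dy, cx + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    # spatial weight, measured in low-res coordinates
                    ws = np.exp(-((ny - gy) ** 2 + (nx - gx) ** 2)
                                / (2 * sigma_s ** 2))
                    # range weight from the high-res RGB guide
                    gdiff = guide_hr[y, x] - guide_hr[min(int(ny * sy), H - 1),
                                                      min(int(nx * sx), W - 1)]
                    wr = np.exp(-np.sum(gdiff ** 2) / (2 * sigma_r ** 2))
                    wsum += ws * wr
                    vsum += ws * wr * depth_lr[ny, nx]
            out[y, x] = (vsum / wsum if wsum > 0
                         else depth_lr[min(cy, h - 1), min(cx, w - 1)])
    return out
```

    On a synthetic depth step with a matching RGB edge, the upsampled depth keeps the discontinuity sharp instead of smearing it.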
  • Publication
    Design of macro-filter-lens with simultaneous chromatic and geometric aberration correction
    (2014-01-01) Prasad, D.K.; Brown, M.S.; COMPUTER SCIENCE
    A macro-filter-lens design that can correct for chromatic and geometric aberrations simultaneously while providing a long focal length is presented. The filter is easy to fabricate since it involves two spherical surfaces and a planar surface. Chromatic aberration correction is achieved by making all the rays travel the same optical distance inside the filter element (negative meniscus). Geometric aberration is corrected by the lens element (plano-convex), which makes the output rays parallel to the optic axis. This macro-filter-lens design does not need additional macro lenses and provides an inexpensive and optically good (aberration-compensated) solution for macro imaging of objects not placed close to the camera. © 2014 Optical Society of America.
  • Publication
    Illuminant estimation for color constancy: Why spatial-domain methods work and the role of the color distribution
    (2014-01-05) Cheng, D.; Prasad, D.K.; Brown, M.S.; COMPUTER SCIENCE
    Color constancy is a well-studied topic in color vision. Methods are generally categorized as (1) low-level statistical methods, (2) gamut-based methods, and (3) learning-based methods. In this work, we distinguish methods depending on whether they work directly from color values (i.e., color domain) or from values obtained from the image's spatial information (e.g., image gradients/frequencies). We show that spatial information does not provide any additional information that cannot be obtained directly from the color distribution and that the indirect aim of spatial-domain methods is to obtain large color differences for estimating the illumination direction. This finding allows us to develop a simple and efficient illumination estimation method that chooses bright and dark pixels using a projection distance in the color distribution and then applies principal component analysis to estimate the illumination direction. Our method gives state-of-the-art results on existing public color constancy datasets as well as on our newly collected dataset (NUS dataset) containing 1736 images from eight different high-end consumer cameras. © 2014 Optical Society of America.
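    The estimation step described above is simple enough to sketch directly: select the darkest and brightest pixels by projection distance along the mean colour direction, then take the first principal component of the selected colours as the illuminant direction. The selection percentage and other details below are assumptions, not the paper's exact values:

```python
import numpy as np

def estimate_illuminant(img, pct=3.5):
    """Bright/dark pixel selection + PCA illuminant estimation (sketch).

    img: array reshapeable to (N, 3) linear RGB values.
    pct: percentage of pixels kept at each end of the projection
    (an illustrative value, not necessarily the paper's choice).
    """
    rgb = img.reshape(-1, 3).astype(float)
    mean_dir = rgb.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    proj = rgb @ mean_dir                          # projection distance
    k = max(1, int(len(proj) * pct / 100))
    order = np.argsort(proj)
    chosen = rgb[np.concatenate([order[:k], order[-k:]])]
    # principal direction (through the origin) of the chosen colours
    _, _, vt = np.linalg.svd(chosen, full_matrices=False)
    ill = np.abs(vt[0])                            # illuminant is nonnegative
    return ill / np.linalg.norm(ill)
```

    On a synthetic scene rendered under a known illuminant, the estimate aligns closely with the true illuminant direction.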
  • Publication
    Color-aware regularization for gradient domain image manipulation
    (2013) Deng, F.; Kim, S.J.; Tai, Y.-W.; Brown, M.S.; COMPUTER SCIENCE
    We propose a color-aware regularization for use with gradient domain image manipulation to avoid color shift artifacts. Our work is motivated by the observation that colors of objects in natural images typically follow distinct distributions in the color space. Conventional regularization methods ignore these distributions which can lead to undesirable colors appearing in the final output. Our approach uses an anisotropic Mahalanobis distance to control output colors to better fit original distributions. Our color-aware regularization is simple, easy to implement, and does not introduce significant computational overhead. To demonstrate the effectiveness of our method, we show the results with and without our color-aware regularization on three gradient domain tasks: gradient transfer, gradient boosting, and saliency sharpening. © 2013 Springer-Verlag.
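    The regularization idea can be illustrated with a small helper that scores candidate output colours by their anisotropic (Mahalanobis) distance to a colour distribution estimated from the input. How such a penalty is wired into a gradient-domain solver is omitted, and the interface is hypothetical:

```python
import numpy as np

def mahalanobis_penalty(colors, samples, eps=1e-6):
    """Anisotropic distance of candidate colours to a colour distribution.

    samples: (N, 3) colours defining the distribution (e.g. the colours
    of one object in the input image); colors: (M, 3) candidate outputs.
    A solver could add this as a regularisation term to keep outputs
    inside the original distribution (sketch only).
    """
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + eps * np.eye(3)
    inv = np.linalg.inv(cov)
    d = colors - mu
    return np.einsum('ij,jk,ik->i', d, inv, d)  # squared Mahalanobis
```

    Because the distance is anisotropic, a shift along the distribution's dominant colour axis is penalised far less than an equal shift across it, which is exactly the behaviour that suppresses colour-shift artifacts.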
  • Publication
    In defence of RANSAC for outlier rejection in deformable registration
    (2012) Tran, Q.-H.; Chin, T.-J.; Carneiro, G.; Brown, M.S.; Suter, D.; COMPUTER SCIENCE
    This paper concerns the robust estimation of non-rigid deformations from feature correspondences. We advance the surprising view that for many realistic physical deformations, the error of the mismatches (outliers) usually dwarfs the effects of the curvature of the manifold on which the correct matches (inliers) lie, to the extent that one can tightly enclose the manifold within the error bounds of a low-dimensional hyperplane for accurate outlier rejection. This justifies a simple RANSAC-driven deformable registration technique that is at least as accurate as other methods based on the optimisation of fully deformable models. We support our ideas with comprehensive experiments on synthetic and real data typical of the deformations examined in the literature. © 2012 Springer-Verlag.
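    A minimal sketch of the underlying mechanism: if inlier correspondences lie close to a hyperplane (here taken in the raw coordinate space for simplicity), a plain RANSAC hyperplane fit separates them from gross mismatches. The threshold and iteration count are illustrative:

```python
import numpy as np

def ransac_hyperplane(X, n_iters=200, thresh=0.05, seed=0):
    """RANSAC fit of a hyperplane a.x = b to rows of X; returns inlier mask.

    Samples d points, fits the hyperplane through them, and keeps the
    model with the most points within `thresh` of it (sketch only).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    best_mask = np.zeros(n, dtype=bool)
    for _ in range(n_iters):
        sample = X[rng.choice(n, size=d, replace=False)]
        # hyperplane through the d sample points: its normal is the
        # least-variance direction of the centred sample
        centred = sample - sample.mean(axis=0)
        normal = np.linalg.svd(centred)[2][-1]
        b = normal @ sample.mean(axis=0)
        mask = np.abs(X @ normal - b) < thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

    With inliers lying on a plane in 3D and gross outliers scattered around it, the returned mask recovers essentially all inliers and rejects nearly all outliers.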
  • Publication
    Nonlinear camera response functions and image deblurring
    (2012) Kim, S.; Tai, Y.-W.; Kim, S.J.; Brown, M.S.; Matsushita, Y.; COMPUTER SCIENCE
    This paper investigates the role that nonlinear camera response functions (CRFs) have on image deblurring. In particular, we show how nonlinear CRFs can cause a spatially invariant blur to behave as a spatially varying blur. This can result in noticeable ringing artifacts when deconvolution is applied even with a known point spread function (PSF). In addition, we show how CRFs can adversely affect PSF estimation algorithms in the case of blind deconvolution. To help counter these effects, we introduce two methods to estimate the CRF directly from one or more blurred images when the PSF is known or unknown. While not as accurate as conventional CRF estimation algorithms based on multiple exposures or calibration patterns, our approach is still quite effective in improving deblurring results in situations where the CRF is unknown. © 2012 IEEE.
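    The non-commutativity at the heart of this observation can be verified in a few lines: blurring in the linear domain and then applying a nonlinear response is not the same as blurring the response-mapped image, so deconvolution with the true PSF operates on the wrong signal. The gamma curve below is a toy stand-in for a real CRF:

```python
import numpy as np

def crf(x, gamma=2.2):
    """Toy nonlinear camera response (a plain gamma curve, illustrative)."""
    return np.clip(x, 0.0, 1.0) ** (1.0 / gamma)

# A linear-domain step edge blurred by a 3-tap box PSF.
scene = np.array([0.0] * 8 + [1.0] * 8)
psf = np.ones(3) / 3.0

observed = crf(np.convolve(scene, psf, mode='same'))   # what the camera records
naive = np.convolve(crf(scene), psf, mode='same')      # what known-PSF deconvolution assumes

# The two differ most near the edge: seen through the CRF, a spatially
# invariant blur no longer behaves as spatially invariant.
gap = np.max(np.abs(observed - naive))
```

    Away from the edge the two signals agree, so the mismatch is intensity-dependent, which is precisely why it shows up as ringing after deconvolution.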
  • Publication
    RAW Image Reconstruction Using a Self-contained sRGB–JPEG Image with Small Memory Overhead
    (Springer New York LLC, 2018) Nguyen, R.M.H.; Brown, M.S.; DEPARTMENT OF COMPUTER SCIENCE
    Most camera images are saved as 8-bit standard RGB (sRGB) compressed JPEGs. Even when JPEG compression is set to its highest quality, the encoded sRGB image has been significantly processed in terms of color and tone manipulation. This makes sRGB–JPEG images undesirable for many computer vision tasks that assume a direct relationship between pixel values and incoming light. For such applications, the RAW image format is preferred, as RAW represents a minimally processed, sensor-specific RGB image that is linear with respect to scene radiance. The drawback with RAW images, however, is that they require large amounts of storage and are not well-supported by many imaging applications. To address this issue, we present a method to encode the necessary data within an sRGB–JPEG image to reconstruct a high-quality RAW image. Our approach requires no calibration of the camera’s colorimetric properties and can reconstruct the original RAW to within 0.5% error with a small memory overhead for the additional data (e.g., 128 KB). More importantly, our output is a fully self-contained 100% compliant sRGB–JPEG file that can be used as-is, not affecting any existing image workflow—the RAW image data can be extracted when needed, or ignored otherwise. We detail our approach and show its effectiveness against competing strategies. © 2017, The Author(s).
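    The reconstruction direction can be sketched once the embedded metadata is in hand. Here the metadata is reduced to an inverse tone curve and a single 3x3 matrix, a deliberately simplified, hypothetical parametrisation (the paper's embedded data and its JPEG packaging are richer):

```python
import numpy as np

def reconstruct_raw(srgb, M, tone_inv):
    """Sketch of metadata-driven RAW reconstruction.

    srgb: (H, W, 3) values in [0, 1]; M: 3x3 colour-space matrix and
    tone_inv: inverse tone curve, standing in for the parameters the
    method would embed in the sRGB-JPEG (hypothetical parametrisation).
    """
    linear = tone_inv(srgb)               # undo the tone curve
    raw = linear.reshape(-1, 3) @ M.T     # undo the colour-space mapping
    return raw.reshape(srgb.shape)
```

    Round-tripping a synthetic RAW image through a toy forward model (matrix then gamma) and back recovers it to floating-point precision; real images add quantisation and gamut clipping, which is where the paper's small per-image payload earns its keep.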
  • Publication
    A new in-camera imaging model for color computer vision and its application
    (2012) Kim, S.J.; Lin, H.T.; Lu, Z.; Süsstrunk, S.; Lin, S.; Brown, M.S.; COMPUTER SCIENCE
    We present a study of in-camera image processing through an extensive analysis of more than 10,000 images from over 30 cameras. The goal of this work is to investigate whether image values can be transformed to physically meaningful values, and if so, when and how this can be done. From our analysis, we found a major limitation of the imaging model employed in conventional radiometric calibration methods and propose a new in-camera imaging model that fits well with today's cameras. With the new model, we present associated calibration procedures that allow us to convert sRGB images back to their original CCD RAW responses in a manner that is significantly more accurate than any existing methods. Additionally, we show how this new imaging model can be used to build an image correction application that converts an sRGB input image captured with the wrong camera settings to an sRGB output image that would have been recorded under the correct settings of a specific camera. © 2012 IEEE.
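    The kind of multi-stage pipeline this work models can be caricatured as white balance, then a colour-space transform, then a nonlinear tone curve. This is a caricature for orientation only, not the paper's fitted model, and all stage names are illustrative:

```python
import numpy as np

def in_camera_model(raw, wb, C, tone):
    """Toy forward in-camera pipeline (illustrative stage order).

    raw: (..., 3) linear sensor RGB; wb: per-channel white-balance
    gains; C: 3x3 colour-space transform; tone: scalar nonlinearity
    applied per channel after gamut clipping.
    """
    x = raw * wb                           # white balance
    x = x.reshape(-1, 3) @ C.T             # colour-space transform
    return tone(np.clip(x, 0.0, 1.0)).reshape(raw.shape)
```

    Radiometric calibration in this setting means estimating each stage well enough to run the pipeline in reverse, rather than fitting one global response curve.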
  • Publication
    Automatic corresponding control points selection for historical document image registration
    (2009) Wang, J.; Brown, M.S.; Tan, C.L.; COMPUTER SCIENCE
    Image registration is crucial for various image analysis tasks. In particular, most approaches to correcting bleed-through distortion in handwritten document images require the recto and verso images to be precisely registered. In this paper, we present a fully automatic method that detects a specified number of corresponding control points from historical documents for the purpose of registration. First, candidate points are located by inspecting the gradient direction maps of the document images. Corresponding control points are then selected based on a dissimilarity metric that incorporates image intensity, gradient magnitude, gradient orientation, and displacement. To improve the quality of the detected control points, median filters and consistency checking are applied to correct mismatches. Experiments on real historical document images show encouraging results, and further improvements can be made by exploiting more sophisticated similarity metrics tailored to the characteristics of historical documents. © 2009 IEEE.
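    The dissimilarity metric described above combines four cues. A hypothetical scoring function (the weights and feature encoding are assumptions, not the paper's) might look like:

```python
import numpy as np

def dissimilarity(p, q, w=(1.0, 1.0, 1.0, 0.1)):
    """Score a candidate control-point pair p, q (lower is better).

    Each point is a dict with intensity 'i', gradient magnitude 'gm',
    gradient orientation 'go' (radians) and position 'xy'.
    The weights w are illustrative, not from the paper.
    """
    wi, wm, wo, wd = w
    # wrap the orientation difference into [-pi, pi]
    d_ori = abs((p['go'] - q['go'] + np.pi) % (2 * np.pi) - np.pi)
    d_pos = float(np.hypot(p['xy'][0] - q['xy'][0],
                           p['xy'][1] - q['xy'][1]))
    return (wi * abs(p['i'] - q['i']) + wm * abs(p['gm'] - q['gm'])
            + wo * d_ori + wd * d_pos)
```

    A matching pair scores zero; any perturbation in intensity, gradient, or position raises the score, so minimising it over candidates selects the most consistent correspondences.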
  • Publication
    Texture amendment: Reducing texture distortion in constrained parameterization
    (2008) Tai, Y.-W.; Brown, M.S.; Tang, C.-K.; Shum, H.-Y.; COMPUTER SCIENCE
    Constrained parameterization is an effective way to establish texture coordinates between a 3D surface and an existing image or photograph. A known drawback to constrained parameterization is visual distortion that arises when the 3D geometry is mismatched to highly textured image regions. This paper introduces an approach to reduce visual distortion by expanding image regions via texture synthesis to better fit the 3D geometry. The result is a new amended texture that maintains the essence of the input texture image but exhibits significantly less distortion when mapped onto the 3D model. © 2008 ACM.