Loong Fah Cheong

Email Address
loongfah@nus.edu.sg


Organizational Units
ENGINEERING (faculty)
COLLEGE OF DESIGN & ENG (faculty)

Publication Search Results

Now showing 1 - 10 of 70
  • Publication
    Apparent distortion of the frontoparallel plane from wide-field motion parallax
    (2001) Cornilleau-Peres, V.; Tai, L.C.; Cheong, L.-F.; ELECTRICAL & COMPUTER ENGINEERING
    We studied the visual perception of large frontoparallel planes from monocular motion parallax, during self-motion and object motion. The isodistortion framework predicts that a frontoparallel plane should be perceived as convex in the motion direction for near viewing distances, with decreasing convexity as the viewing distance increases. The stimuli represented dotted planes or dihedrals with a vertical or horizontal curvature. In condition SM (self-motion) the subject translated his/her head laterally. This movement was recorded and used to generate images simulating the presence of stationary surfaces. In condition OM (object-motion), the subject was stationary, and his/her recorded motion was applied to rotate the surfaces in depth, reproducing the SM optic flow that would be obtained if gaze stabilization were perfect. Image size was 70 deg, and viewing distances were 0.5 m or 4 m. Subjects indicated whether the surfaces were concave or convex in the horizontal or vertical direction, the surfaces being planes and horizontally (respectively vertically) curved dihedrals. A few depth reversals occurred at large dihedral angles in condition OM. The frontoparallel plane was perceived as convex in conditions SM and OM in both directions (horizontal and vertical). The thresholds of the response curves were significantly negative, indicating that the AFP (apparent frontoparallel plane) was always a concave surface. The results were similar for conditions SM and OM. The slopes of the psychometric curves tended to be higher for the vertically curved dihedrals than for the horizontally curved ones. Motion parallax thus yields an apparent distortion of large frontoparallel planes. This distortion is similar during self-motion and object motion, and may not be due to the integration of non-visual signals related to self-motion. The similarity of the distortion in the vertical and horizontal directions questions the isodistortion model, as well as the spin-variation model.
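    The threshold and slope analysis above is standard psychophysics. A minimal sketch of such a fit, assuming an illustrative logistic psychometric function (the data, functional form, and parameters below are hypothetical, not the authors' exact procedure):

        # Fit a logistic psychometric function to convex/concave
        # judgments: threshold = curvature at 50% "convex" responses,
        # slope = steepness of the response curve.
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(c, threshold, slope):
            return 1.0 / (1.0 + np.exp(-slope * (c - threshold)))

        # Hypothetical data: surface curvature (negative = concave)
        # vs. proportion of "convex" responses.
        curvature = np.array([-0.4, -0.2, -0.1, 0.0, 0.1, 0.2, 0.4])
        p_convex = np.array([0.05, 0.20, 0.45, 0.70, 0.85, 0.95, 1.00])

        (threshold, slope), _ = curve_fit(logistic, curvature,
                                          p_convex, p0=(0.0, 5.0))
        # A negative threshold means a physically concave surface
        # already looks frontoparallel, i.e. the AFP is concave --
        # the finding reported above.
        print(f"threshold = {threshold:.3f}, slope = {slope:.2f}")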
  • Publication
    Multi-view repetitive structure detection
    (2011) Jiang, N.; Tan, P.; Cheong, L.-F.; ELECTRICAL & COMPUTER ENGINEERING
    Symmetry, and especially repetitive structure, is demonstrated universally in architecture across countries and cultures. Existing detection methods mainly focus on the detection of planar patterns from a single image. It is difficult to apply them to detect repetitive structures in architecture, which abounds with non-planar 3D repetitive elements (such as balconies and windows) and curved surfaces. We study the repetitive structure detection problem from multiple images of such architecture. Our method jointly analyzes these images and a set of 3D points reconstructed from them by structure-from-motion algorithms. The 3D points help to rectify geometric deformations and hypothesize possible lattice structures, while the images provide denser color and texture information to evaluate and confirm these hypotheses. In the experiments, we compare our method with an existing algorithm. We also show how our results might be used to assist image-based modeling. © 2011 IEEE.
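    A minimal sketch of the lattice-hypothesis idea, assuming repeated elements have already been matched and triangulated; voting over pairwise displacements is a simplified stand-in for the paper's joint image/3D analysis:

        # Hypothesize a 1D repetition (lattice) vector from the 3D
        # centers of matched elements: the most frequent pairwise
        # displacement is a candidate lattice generator.
        import numpy as np

        def repetition_vector(centers, grid=0.05):
            """centers: (N, 3) element positions from SfM."""
            i, j = np.triu_indices(len(centers), k=1)
            d = centers[j] - centers[i]       # pairwise displacements
            d[d[:, 0] < 0] *= -1              # canonical sign
            keys = np.round(d / grid).astype(int)  # quantize to vote
            uniq, counts = np.unique(keys, axis=0, return_counts=True)
            votes = np.all(keys == uniq[np.argmax(counts)], axis=1)
            return d[votes].mean(axis=0)      # refined generator

        # Hypothetical facade: windows repeated every 1.5 m along x.
        rng = np.random.default_rng(1)
        centers = np.array([[1.5 * k, 0.0, 0.0] for k in range(6)])
        centers += rng.normal(scale=0.01, size=centers.shape)
        print(repetition_vector(centers))     # ~ [1.5, 0, 0]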
  • Publication
    Error characteristics of SFM with erroneous focal length
    (2006) Cheong, L.-F.; Xiang, X.; ELECTRICAL & COMPUTER ENGINEERING
    This paper presents a theoretical analysis of the behavior of "Structure from Motion" (SFM) algorithms with respect to errors in the intrinsic parameters of the camera. We demonstrate, both analytically and in simulation, how uncertainty in the calibration parameters propagates to the motion estimates. We study the behavior of the focus of expansion (FOE) estimate in the case where the camera is well calibrated except that the focal length is estimated with error. The results suggest that the behavior of the bas-relief ambiguity is affected by the erroneous focal length; the amount of influence depends on the relative directions of the translation and rotation parameters of the camera, the field of view, and the scene depth. Simulations with synthetic data were conducted to support our findings. © Springer-Verlag Berlin Heidelberg 2006.
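    A sketch of the kind of synthetic experiment described, assuming a standard flow-based estimator (depth is eliminated per pixel and the rotation solved linearly for each candidate translation direction); the motion, scene, and focal-length values are illustrative, not the paper's setup:

        import numpy as np

        rng = np.random.default_rng(0)
        f_true, f_bad = 800.0, 600.0           # pixels

        t_true = np.array([0.1, 0.0, 0.2])     # true translation
        w_true = np.array([0.0, 0.01, 0.0])    # true rotation
        n = 500
        x_px = rng.uniform(-300, 300, n)
        y_px = rng.uniform(-300, 300, n)
        Z = rng.uniform(4, 10, n)

        def flow(x, y, Z, t, w):
            """Calibrated optic-flow model, normalized coordinates."""
            u = (-t[0] + x*t[2])/Z + x*y*w[0] - (1+x*x)*w[1] + y*w[2]
            v = (-t[1] + y*t[2])/Z + (1+y*y)*w[0] - x*y*w[1] - x*w[2]
            return u, v

        u_px, v_px = (f_true * c for c in
                      flow(x_px/f_true, y_px/f_true, Z, t_true, w_true))

        def estimate_foe(f_assumed):
            x, y = x_px/f_assumed, y_px/f_assumed
            u, v = u_px/f_assumed, v_px/f_assumed
            best = (np.inf, None)
            # Coarse search over translation directions (tz > 0).
            for az in np.linspace(-0.8, 0.8, 81):
                for el in np.linspace(-0.4, 0.4, 41):
                    t = np.array([np.sin(az), np.sin(el), 1.0])
                    t /= np.linalg.norm(t)
                    p = -t[0] + x*t[2]
                    q = -t[1] + y*t[2]
                    # Depth-eliminated constraint, linear in w:
                    #   (u - u_w) q - (v - v_w) p = 0
                    A = np.column_stack([x*y*q - (1 + y*y)*p,
                                         -(1 + x*x)*q + x*y*p,
                                         y*q + x*p])
                    b = u*q - v*p
                    w = np.linalg.lstsq(A, b, rcond=None)[0]
                    r = np.sum((A @ w - b) ** 2)
                    if r < best[0]:
                        best = (r, t)
            t = best[1]
            return f_true * t[:2] / t[2]   # FOE in (true) pixel units

        print("true FOE   :", f_true * t_true[:2] / t_true[2])
        print("correct f  :", estimate_foe(f_true))
        print("erroneous f:", estimate_foe(f_bad))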
  • Publication
    Synergizing spatial and temporal texture
    (2002-10) Peh, C.-H.; Cheong, L.-F.; ELECTRICAL & COMPUTER ENGINEERING
    Temporal texture accounts for a large proportion of the motion commonly experienced in the visual world. Current temporal texture techniques extract primarily motion-based features for recognition. We propose in this paper a representation in which the spatial and temporal aspects of texture are coupled. Such a representation has the advantages of improving efficiency as well as retaining both spatial and temporal semantics. Flow measurements form the basis of our representation. The magnitudes and directions of the normal flow are mapped as spatiotemporal textures. These textures are then aggregated over time and subsequently analyzed by classical texture analysis tools. Such aggregation traces the history of a motion, which can be useful in understanding motion types. By providing a spatiotemporal analysis, our approach gains several advantages over previous implementations. The strength of our approach was demonstrated in a series of experiments, including classification and comparisons with other algorithms.
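    A minimal sketch of the representation's front end: normal flow is the flow component along the brightness gradient, |I_t|/|∇I|, computable directly from spatiotemporal derivatives. The aggregation below (simple temporal means) is a simplified stand-in for the paper's texture machinery:

        import numpy as np

        def normal_flow(prev, curr, eps=1e-6):
            """Per-pixel normal-flow magnitude and direction."""
            gy, gx = np.gradient(curr.astype(float))
            gt = curr.astype(float) - prev.astype(float)
            mag = np.abs(gt) / (np.hypot(gx, gy) + eps)  # |normal flow|
            ang = np.arctan2(gy, gx)                     # gradient dir.
            return mag, ang

        def aggregate(frames):
            """Accumulate magnitude/direction maps over time into two
            spatiotemporal texture images (naive means; a circular
            mean for angles is omitted for brevity)."""
            mags, angs = [], []
            for prev, curr in zip(frames, frames[1:]):
                m, a = normal_flow(prev, curr)
                mags.append(m); angs.append(a)
            return np.mean(mags, axis=0), np.mean(angs, axis=0)

        # Illustrative input: a drifting random-dot sequence.
        rng = np.random.default_rng(0)
        base = rng.random((64, 64))
        frames = [np.roll(base, k, axis=1) for k in range(10)]
        mag_map, ang_map = aggregate(frames)
        # These maps can then be fed to classical texture analysis
        # (e.g., co-occurrence statistics) for classification.
        print(mag_map.mean(), ang_map.std())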
  • Publication
    Absolute distance perception during in-depth head movement: Calibrating optic flow with extra-retinal information
    (2002) Peh, C.-H.; Panerai, F.; Droulez, J.; Cornilleau-Pérès, V.; Cheong, L.-F.; ELECTRICAL & COMPUTER ENGINEERING
    We investigated the ability of monocular human observers to scale absolute distance during sagittal head motion in the presence of pure optic flow information. Subjects were presented, at eye level, with computer-generated spheres (covered with randomly distributed dots) placed at several distances. We compared the condition of self-motion (SM) versus object-motion (OM) using equivalent optic flow fields. When the amplitude of head movement was relatively constant, subjects estimated absolute distance rather accurately in both the SM and OM conditions. However, when the amplitude changed on a trial-to-trial basis, subjects' performance deteriorated only in the OM condition. We found that distance judgments in the OM condition correlated strongly with optic flow divergence, and that non-visual cues served as important factors for scaling distances in the SM condition. Absolute distance also seemed to be better scaled with sagittal head movement than with lateral head translation. © 2002 Elsevier Science Ltd. All rights reserved.
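    A back-of-the-envelope sketch of the geometry behind this calibration: for in-depth translation at speed Vz toward a surface at distance Z, the optic-flow divergence near the line of sight is div = 2*Vz/Z, so a head speed known from extra-retinal signals scales the flow to absolute distance. The numbers are illustrative:

        # Distance from flow divergence during sagittal self-motion,
        # with head speed known from non-visual (extra-retinal) cues.
        Vz = 0.15    # head speed toward the display, m/s (assumed)
        div = 0.30   # measured flow divergence, 1/s (assumed)
        Z = 2 * Vz / div
        print(f"estimated absolute distance: {Z:.2f} m")   # 1.00 m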
  • Publication
    On the distortion of shape recovery from motion
    (2004-09-01) Xiang, T.; Cheong, L.-F.; ELECTRICAL & COMPUTER ENGINEERING
    Given that most current structure from motion (SFM) algorithms cannot recover true motion estimates reliably, it is important to understand the impact such motion errors have on shape reconstruction. In this paper, various robustness issues surrounding different types of second-order shape estimates recovered from the motion cue are addressed. We present a theoretical model to understand the impact that errors in the motion estimates have on shape recovery. Using this model, we focus on the recovery of second-order shape under different generic motions, each presenting a different degree of error sensitivity. We also show that different shapes exhibit different degrees of robustness with respect to their recovery. Understanding such distortion behavior is important if we want to design better strategies for fusion with other shape cues. © 2004 Elsevier B.V. All rights reserved.
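    A minimal simulation of the distortion being analyzed, assuming the standard calibrated flow model: exact flow from a frontoparallel plane is inverted with slightly erroneous motion estimates, and the recovered surface comes out curved. All numbers are illustrative:

        import numpy as np

        def flow(x, y, Z, t, w):
            u = (-t[0] + x*t[2])/Z + x*y*w[0] - (1+x*x)*w[1] + y*w[2]
            v = (-t[1] + y*t[2])/Z + (1+y*y)*w[0] - x*y*w[1] - x*w[2]
            return u, v

        def recover_depth(x, y, u, v, t, w):
            """Least-squares depth per pixel, given assumed motion."""
            ur = x*y*w[0] - (1 + x*x)*w[1] + y*w[2]   # rotational flow
            vr = (1 + y*y)*w[0] - x*y*w[1] - x*w[2]
            p = -t[0] + x*t[2]
            q = -t[1] + y*t[2]
            inv_Z = (p*(u - ur) + q*(v - vr)) / (p*p + q*q)
            return 1.0 / inv_Z

        # Frontoparallel plane at Z = 5, lateral + forward translation
        # with a small rotation.
        x, y = np.meshgrid(np.linspace(-0.4, 0.4, 9),
                           np.linspace(-0.4, 0.4, 9))
        Z = np.full_like(x, 5.0)
        t_true, w_true = np.array([0.1, 0, 0.02]), np.array([0, 0.01, 0])
        u, v = flow(x, y, Z, t_true, w_true)

        # Erroneous estimates: bas-relief-style confusion between
        # lateral translation and rotation about the orthogonal axis.
        t_est, w_est = np.array([0.08, 0, 0.02]), np.array([0, 0.014, 0])
        Z_hat = recover_depth(x, y, u, v, t_est, w_est)
        print(Z_hat[4, :])   # the plane is recovered as curved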
  • Publication
    Symmetric architecture modeling with a single image
    (2009) Jiang, N.; Tan, P.; Cheong, L.-F.; ELECTRICAL & COMPUTER ENGINEERING
    We present a method to recover a 3D texture-mapped architecture model from a single image. Both single-image modeling and architecture modeling are challenging problems. We handle these difficulties by employing constraints derived from shape symmetries, which are prevalent in architecture. We first present a novel algorithm to calibrate the camera from a single image by exploiting symmetry. Then a set of 3D points is recovered according to the calibration and the underlying symmetry. With these reconstructed points, the user interactively marks out components of the architecture structure, whose shapes and positions are automatically determined according to the 3D points. Lastly, we texture the 3D model according to the input image, enhancing the texture quality at foreshortened and occluded regions according to their symmetric counterparts. The modeling process requires only a few minutes of interaction. Multiple examples are provided to demonstrate the presented method. © 2009 ACM.
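    Single-image calibration from symmetry typically reduces to vanishing points of orthogonal scene directions, which constrain the focal length. A sketch of that standard relation, as a simplified stand-in for the paper's symmetry-based algorithm (the values are illustrative):

        # With zero skew, unit aspect ratio, and a known principal
        # point p, vanishing points v1, v2 of orthogonal directions
        # satisfy (v1 - p).(v2 - p) + f^2 = 0.
        import numpy as np

        def focal_from_orthogonal_vps(v1, v2, p):
            d = -np.dot(np.asarray(v1) - p, np.asarray(v2) - p)
            if d <= 0:
                raise ValueError("VPs inconsistent with orthogonality")
            return np.sqrt(d)

        p = np.array([320.0, 240.0])     # principal point
        v1 = np.array([920.0, 240.0])    # VP of one facade direction
        v2 = np.array([-280.0, 240.0])   # VP of orthogonal direction
        print(focal_from_orthogonal_vps(v1, v2, p))   # 600.0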
  • Publication
    Fall detection and alert for ageing-at-home of elderly
    (2009) Yu, X.; Wang, X.; Kittipanya-Ngam, P.; Eng, H.L.; Cheong, L.-F.; ELECTRICAL & COMPUTER ENGINEERING
    Fall detection has been an active research problem: the technology is critical to ageing-at-home of the elderly, since it can enhance their safety and boost their confidence in ageing at home by immediately alerting caregivers when a fall occurs. This paper presents a fall detection algorithm for ageing-at-home of the elderly. The algorithm detects fall events by identifying a (human) shape state-change pattern reflecting a fall incident in video recorded by a single fixed camera. The novelty of the algorithm is threefold. First, it detects fall occurrence by identifying the state-change pattern. Second, it uses the camera projection matrix in its computation, thereby eliminating camera-setting-related learning. Lastly, it adds constraints to the state-change pattern to reduce false alarms. Experiments show that the proposed algorithm has promising performance. © 2009 Springer Berlin Heidelberg.
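    A toy sketch of the state-change idea: classify per-frame posture states from a calibrated height estimate, and flag a fall only when a standing-to-lying transition is fast and the lying state persists (the kind of constraint that cuts false alarms). The states, thresholds, and height cue are illustrative assumptions, not the paper's exact features:

        from dataclasses import dataclass

        @dataclass
        class FallDetector:
            stand_h: float = 1.4       # metres; above -> "standing"
            lie_h: float = 0.5         # below -> "lying"
            max_fall_frames: int = 10  # drop must be fast
            confirm_frames: int = 25   # ...and lying must persist

            def __post_init__(self):
                self.history = []

            def update(self, height_m):
                """Feed one per-frame height (assumed derived via the
                projection matrix); True when a fall is confirmed."""
                if height_m > self.stand_h:
                    self.history.append("standing")
                elif height_m < self.lie_h:
                    self.history.append("lying")
                else:
                    self.history.append("transition")
                h = self.history
                if len(h) < self.confirm_frames or any(
                        s != "lying" for s in h[-self.confirm_frames:]):
                    return False
                pre = h[:-self.confirm_frames]
                if "standing" not in pre:
                    return False
                last = len(pre) - 1 - pre[::-1].index("standing")
                return (len(pre) - last) <= self.max_fall_frames

        det = FallDetector()
        stream = [1.7]*30 + [1.0]*4 + [0.3]*40  # quick drop, then still
        alerts = [det.update(v) for v in stream]
        print("fall alert at frame", alerts.index(True))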
  • Publication
    Block-sparse RPCA for consistent foreground detection
    (2012) Gao, Z.; Cheong, L.-F.; Shan, M.; ELECTRICAL & COMPUTER ENGINEERING
    A recent evaluation of representative background subtraction techniques demonstrated the drawbacks of these methods, with hardly any approach able to reach more than 50% precision at recall levels higher than 90%. Challenges in realistic environments include illumination change causing complex intensity variation, background motions (trees, waves, etc.) whose magnitude can be greater than that of the foreground, poor image quality under low light, and camouflage. Existing methods often handle only part of these challenges; we address all of them in a unified framework which makes few specific assumptions about the background. We regard the observed image sequence as the sum of a low-rank background matrix and a sparse outlier matrix, and solve the decomposition using the Robust Principal Component Analysis method. We dynamically estimate the support of the foreground regions via a motion saliency estimation step, so as to impose spatial coherence on these regions. Unlike smoothness constraints such as MRFs, our method is able to obtain crisply defined foreground regions, and in general it handles large dynamic background motion much better. Extensive experiments on benchmark and additional challenging datasets demonstrate that our method significantly outperforms state-of-the-art approaches and works effectively on a wide range of complex scenarios. © 2012 Springer-Verlag.
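    The core decomposition is Principal Component Pursuit: minimize ||L||_* + λ||S||_1 subject to D = L + S. A minimal sketch via the standard inexact augmented Lagrange multiplier method; the paper's block-sparsity and motion-saliency machinery sits on top of this and is omitted here:

        import numpy as np

        def shrink(M, tau):                  # soft-thresholding
            return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

        def svt(M, tau):                     # singular-value threshold
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(shrink(s, tau)) @ Vt

        def rpca(D, lam=None, tol=1e-7, max_iter=500):
            m, n = D.shape
            if lam is None:
                lam = 1.0 / np.sqrt(max(m, n))
            spec = np.linalg.norm(D, 2)      # spectral norm
            Y = D / max(spec, np.abs(D).max() / lam)
            mu, rho = 1.25 / spec, 1.5
            L = S = np.zeros_like(D)
            for _ in range(max_iter):
                L = svt(D - S + Y / mu, 1 / mu)
                S = shrink(D - L + Y / mu, lam / mu)
                R = D - L - S
                Y = Y + mu * R
                mu *= rho
                if np.linalg.norm(R) / np.linalg.norm(D) < tol:
                    break
            return L, S

        # Hypothetical data: columns are frames of a static background
        # plus a sparse moving foreground blob.
        rng = np.random.default_rng(0)
        bg = np.outer(rng.random(400), np.ones(50))   # rank-1 bg
        fg = np.zeros_like(bg)
        for j in range(50):
            r = (8 * j) % 380
            fg[r:r + 20, j] = 1.0                     # moving object
        L, S = rpca(bg + fg)
        print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-3),
              "| #outliers ~", int((np.abs(S) > 0.1).sum()))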
  • Publication
    Simultaneous camera pose and correspondence estimation with motion coherence
    (2012-01) Lin, W.-Y.; Cheong, L.-F.; Tan, P.; Dong, G.; Liu, S.; ELECTRICAL & COMPUTER ENGINEERING
    Traditionally, the camera pose recovery problem has been formulated as one of estimating the optimal camera pose given a set of point correspondences. This critically depends on the accuracy of the point correspondences and would have problems in dealing with ambiguous features such as edge contours and high visual clutter. Joint estimation of camera pose and correspondence attempts to improve performance by explicitly acknowledging the chicken and egg nature of the pose and correspondence problem. However, such joint approaches for the two-view problem are still few and even then, they face problems when scenes contain largely edge cues with few corners, due to the fact that epipolar geometry only provides a "soft" point to line constraint. Viewed from the perspective of point set registration, the point matching process can be regarded as the registration of points while preserving their relative positions (i.e. preserving scene coherence). By demanding that the point set should be transformed coherently across views, this framework leverages on higher level perceptual information such as the shape of the contour. While thus potentially allowing registration of non-unique edge points, the registration framework in its traditional form is subject to substantial point localization error and is thus not suitable for estimating camera pose. In this paper, we introduce an algorithm which jointly estimates camera pose and correspondence within a point set registration framework based on motion coherence, with the camera pose helping to localize the edge registration, while the "ambiguous" edge information helps to guide camera pose computation. The algorithm can compute camera pose over large displacements and by utilizing the non-unique edge points can recover camera pose from what were previously regarded as feature-impoverished SfM scenes. Our algorithm is also sufficiently flexible to incorporate high dimensional feature descriptors and works well on traditional SfM scenes with adequate numbers of unique corners. © 2011 Springer Science+Business Media, LLC.