  • Home
  • Core Technology 
    • Computational Optics Platform
  • Product 
    • BM3D Denoising IP Core
    • Okulo C1
    • Okulo P1
    • Okulo A1
    • Vidu SDK
  • Resources 
    • Computational Imaging Courses
    • Download Center
  • About Us 
    • Introduction
    • Careers
    • Contact Us
      • About Us

        Point Spread Technology is committed to revolutionizing computational photography and computational optics with world-leading computational imaging technology: driving fundamental advances in consumer electronics, automotive, and industrial imaging, and moving optical design into the era of automatic optimization, including the joint automatic optimization of optics and the ISP.

        Point Spread has strong R&D and applied engineering teams. Team members hold a series of patents and have published hundreds of papers in top-tier journals and conferences, with deep expertise in optics, computational imaging, computer vision, embedded systems, robotic control, and mechanical design.

      • Qilin Sun

        SIGGRAPH

        End-to-End Complex Lens Design with Differentiable Ray Tracing

        Qilin Sun, King Abdullah University of Science and Technology, Saudi Arabia; Point Spread Technology, China

        Congli Wang, King Abdullah University of Science and Technology, Saudi Arabia

        Qiang Fu, King Abdullah University of Science and Technology, Saudi Arabia

        Xiong Dun, Point Spread Technology, China; Tongji University, China

        Wolfgang Heidrich, King Abdullah University of Science and Technology, Saudi Arabia

        CVPR

        Learning Rank-1 Diffractive Optics for Single-shot High Dynamic Range Imaging

        Qilin Sun (KAUST), Ethan Tseng (Princeton University), Qiang Fu (KAUST), Wolfgang Heidrich (KAUST), Felix Heide (Princeton University)

        Abstract:

        High-dynamic-range (HDR) imaging is an essential imaging modality for a wide range of applications in uncontrolled environments, including autonomous driving, robotics, and mobile phone cameras. However, existing HDR techniques in commodity devices struggle with dynamic scenes due to multi-shot acquisition and post-processing time, e.g. mobile phone burst photography, making such approaches unsuitable for real-time applications. In this work, we propose a method for snapshot HDR imaging by learning an optical HDR encoding in a single image which maps saturated highlights into neighboring unsaturated areas using a diffractive optical element (DOE). We propose a novel rank-1 parameterization of the DOE which drastically reduces the optical search space while allowing us to efficiently encode high-frequency detail. We propose a reconstruction network tailored to this rank-1 parameterization for the recovery of clipped information from the encoded measurements. The proposed end-to-end framework is validated through simulation and real-world experiments and improves the PSNR by more than 7 dB over state-of-the-art end-to-end designs.
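
        To make the rank-1 parameterization concrete, here is a minimal NumPy sketch (illustrative only, not the authors' code; the resolution, wavelength, and index contrast are assumed values). Only two 1-D profiles are optimized; their outer product forms the full height map, shrinking the search space from N² to 2N parameters:

        import numpy as np

        # Rank-1 DOE height map: outer product of two 1-D profiles (values assumed).
        N = 512                        # DOE resolution (hypothetical)
        u = np.random.rand(N)          # learnable 1-D profile along y
        v = np.random.rand(N)          # learnable 1-D profile along x
        height_map = np.outer(u, v)    # (N, N) height map with only 2N free parameters

        # Phase delay of a thin diffractive element at wavelength lam,
        # with refractive-index contrast delta_n (both assumed values).
        lam, delta_n = 550e-9, 0.5
        phase = 2 * np.pi / lam * delta_n * height_map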

        PhD Thesis

        End-to-end Optics Design for Computational Cameras

        Qilin Sun

        Abstract:

        Imaging systems have long been designed in two separate steps: experience-driven optical design followed by sophisticated image processing. This general-purpose approach has been successful in the past, but it leaves open how best to serve specific tasks, how to find the optimal compromise between optics and post-processing, and how to minimize cost. Driven by this, a series of works is proposed to bring imaging system design into an end-to-end fashion step by step: from joint optics design, point spread function (PSF) optimization, and phase map optimization to a general end-to-end complex lens camera.
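
        A toy, self-contained sketch of what "end-to-end" means in this setting (illustrative only: a learnable blur kernel stands in for a real differentiable optics model, and the loss, shapes, and architecture are assumptions). The point is that a single task loss back-propagates into both the optics parameters and the reconstruction network:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ToyCamera(nn.Module):
            def __init__(self, k=7):
                super().__init__()
                # "Optics": a learnable kernel standing in for a PSF.
                self.psf_logits = nn.Parameter(torch.zeros(1, 1, k, k))
                # "Post-processing": a tiny reconstruction network.
                self.isp = nn.Sequential(
                    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1),
                )

            def forward(self, scene):
                # Non-negative, normalized PSF via softmax over kernel entries.
                psf = torch.softmax(self.psf_logits.flatten(), 0).view_as(self.psf_logits)
                sensor = F.conv2d(scene, psf, padding=psf.shape[-1] // 2)  # image formation
                return self.isp(sensor)

        cam = ToyCamera()
        opt = torch.optim.Adam(cam.parameters(), lr=1e-3)
        scene = torch.rand(4, 1, 32, 32)
        loss = F.mse_loss(cam(scene), scene)  # one task loss...
        loss.backward()                       # ...updates optics AND network together
        opt.step()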

      • Wenbo Bao

        IEEE

        MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement

        Wenbo Bao, Wei-Sheng Lai, Xiaoyun Zhang, Zhiyong Gao, and Ming-Hsuan Yang

        Abstract:

        Motion estimation (ME) and motion compensation (MC) have been widely used in classical video frame interpolation systems over the past decades. Recently, a number of data-driven frame interpolation methods based on convolutional neural networks have been proposed. However, existing learning-based methods typically estimate either flow or compensation kernels, thereby limiting performance in both computational efficiency and interpolation accuracy. In this work, we propose a motion estimation and compensation driven neural network for video frame interpolation. A novel adaptive warping layer is developed to integrate both optical flow and interpolation kernels to synthesize target frame pixels. This layer is fully differentiable such that both the flow and kernel estimation networks can be optimized jointly. The proposed model benefits from the advantages of motion estimation and compensation methods without using hand-crafted features. Compared to existing methods, our approach is computationally efficient and able to generate more visually appealing results. Furthermore, the proposed MEMC-Net architecture can be seamlessly adapted to several video enhancement tasks, e.g., super-resolution, denoising, and deblocking.
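
        A toy sketch of the adaptive warping idea (illustrative only, not the paper's layer: it rounds to nearest-integer displacements rather than using the differentiable bilinear sampling described above, and all shapes and inputs are assumed). Each output pixel is a kernel-weighted sum of a small patch at the flow-displaced location, so flow and kernels cooperate in a single sampling step:

        import numpy as np

        def adaptive_warp(frame, flow, kernels, k=4):
            # Each output pixel = weighted sum of a k x k patch taken at the
            # flow-displaced location in the source frame, using a per-pixel kernel.
            H, W = frame.shape
            out = np.zeros_like(frame)
            pad = k // 2
            padded = np.pad(frame, pad, mode='edge')
            for y in range(H):
                for x in range(W):
                    sy = int(np.clip(np.rint(y + flow[y, x, 1]), 0, H - 1))
                    sx = int(np.clip(np.rint(x + flow[y, x, 0]), 0, W - 1))
                    patch = padded[sy:sy + k, sx:sx + k]       # k x k neighborhood
                    out[y, x] = np.sum(kernels[y, x] * patch)  # per-pixel kernel
            return out

        # Usage with random inputs (shapes only; kernels would be network outputs):
        H, W, k = 16, 16, 4
        frame   = np.random.rand(H, W)
        flow    = np.random.randn(H, W, 2)
        kernels = np.random.rand(H, W, k, k)
        kernels /= kernels.sum(axis=(2, 3), keepdims=True)     # normalize weights
        warped = adaptive_warp(frame, flow, kernels, k)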

        IEEE

        High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion

        Wenbo Bao, Student Member, IEEE, Xiaoyun Zhang, Member, IEEE, Li Chen, Member, IEEE, Lianghui Ding, and Zhiyong Gao

        Abstract:

        This paper proposes a novel frame rate up-conversion method through a high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant-brightness and linear-motion assumptions in traditional methods, the intensity and position of video pixels are both modeled with high-order polynomials in time. The key problem is then to estimate the polynomial coefficients that represent each pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of the intensity variation from its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization over these coefficients, we propose a dynamic filtering solution inspired by the video's temporal coherence: the optimal estimate of the coefficients is reformulated as a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum-likelihood estimate from the current observation. Finally, frame rate up-conversion is implemented via motion-compensated interpolation using the pixel-wise intensity variation and motion trajectory.
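
        As a rough illustration of the two ingredients (hypothetical numbers, not the paper's code): a second-order polynomial replaces the classical linear-motion assumption, and dynamic filtering fuses a propagated prior with the current observation, similar in spirit to a scalar Kalman update:

        import numpy as np

        # A pixel's trajectory as a second-order polynomial in time (values assumed):
        # x(t) = x0 + v*t + 0.5*a*t^2, instead of the classical linear x(t) = x0 + v*t.
        x0 = np.array([10.0, 20.0])   # position at t = 0 (pixels)
        v  = np.array([ 2.0, -1.0])   # estimated velocity (pixels/frame)
        a  = np.array([ 0.4,  0.0])   # estimated acceleration (pixels/frame^2)
        t  = 0.5                      # interpolation instant between two frames
        x_t = x0 + v * t + 0.5 * a * t**2

        # Dynamic filtering: fuse the prior estimate (propagated from the pixel's
        # temporal predecessor) with the current observation, weighted by a gain
        # (gain value assumed), much like a scalar Kalman update.
        prior, obs, gain = x_t, np.array([11.1, 19.4]), 0.6
        fused = prior + gain * (obs - prior)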

        PhD Thesis

        Research on Recursive Models and Deep Learning Methods for Video Frame Rate Up-Conversion

        In recent years, with the development of high-quality display devices, the demand for high-quality video sources, including spatially and temporally high-resolution data, has become more urgent than ever. However, limited by the high computational cost and bandwidth consumption of video acquisition, compression, and transmission, a practical solution is to transform existing low-quality videos into high-quality ones through digital signal processing. Among research on video quality enhancement, super-resolving videos in the temporal domain, namely video frame rate up-conversion, is the most challenging task and also the fundamental approach to delivering immersive visual experiences to users. Specifically, video frame rate up-conversion aims to interpolate additional transitional frames into original low-frame-rate (such as 30 Hz) videos to obtain high-frame-rate (such as 60 Hz or even 120 Hz) ones. The interpolated frames make object movements in the videos more refined and the transition of frame contents smoother, significantly improving visual quality for users.
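
        For intuition, a minimal sketch of the task itself (illustrative only; plain averaging is the naive baseline that motion-compensated interpolation improves on):

        import numpy as np

        def double_frame_rate_naive(frames):
            # Insert one synthesized frame between each consecutive pair.
            # Plain averaging ghosts moving objects; motion-compensated methods
            # instead shift pixels along estimated motion trajectories.
            out = []
            for f0, f1 in zip(frames[:-1], frames[1:]):
                out.append(f0)
                out.append(0.5 * (f0 + f1))  # transitional frame at t = 0.5
            out.append(frames[-1])
            return out

        video_30hz = [np.random.rand(4, 4) for _ in range(5)]   # 5 frames at "30 Hz"
        video_60hz = double_frame_rate_naive(video_30hz)        # 9 frames at "60 Hz"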


      Copyright © 2020 - 2024 PointSpread (点昀技术(南通)有限公司). All Rights Reserved.

      苏公网安备 32060102320784号 | 苏ICP备2022041483号-2