
Understanding Video and Imaging Equipment

Where to Start?

“I would like to use video, but where do I start?” We receive
hundreds of calls each year from people wanting to use video in a
never-ending variety of applications. Over the years, we have found the
best way to help our customers is to have them determine four basic
system factors:


Figure 1: Fundamental Parameters of an Imaging System

  1. Field Of View (FOV): the area of the object that will be viewed on the monitor.
  2. Resolution: the smallest resolvable feature of the object.
  3. Minimum and Maximum Working Distances: measured from the objective lens to the object.
  4. Depth Of Field Requirement: the maximum object depth that must be kept in focus.

 
Image Quality Parameters and the Components Affecting Them

Resolution: a measurement of the imaging system’s ability to reproduce object detail.
Affected by:

  • Lens
  • Monitor
  • Camera
  • Capture Board

Contrast: the separation between the shades of gray in the image that define an object and its background.
Affected by:

  • Camera
  • Capture Board
  • Illumination

Depth of Field (DOF): the difference between the closest and farthest distances an object may be shifted before an unacceptable blur is observed. Sometimes called Depth of Focus, DOF should not be confused with working distance. It also applies to objects that themselves have depth.
Affected by:

  • Aperture of Lens (f/#)

Distortion: an optical error (aberration) in the lens that causes a difference in magnification at different points within the image.
Affected by:

  • Lens

Perspective Errors: also called parallax, a behavior of conventional lenses in which magnification changes as the object moves in and out of best focus, so closer objects appear larger than objects farther away. Telecentric lenses optically correct for this effect.
Affected by:

  • Lens

Terminology Used in Electronic Imaging

Sensor Size:

The size of a camera sensor’s active area, typically specified in the
horizontal dimension. This parameter is important in determining the
primary magnification (PMAG) required to obtain a desired field of view.
Note: Most analog cameras have a 4:3 (H:V) dimensional aspect ratio.

Primary Magnification:

Defined as the ratio between the sensor size and the FOV, PMAG is the
“work” done by the lens.
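For example (illustrative numbers only, following the definition above):

PMAG = sensor size / field of view

A nominal 1/2" sensor has a horizontal active area of about 6.4 mm, so viewing a 64 mm wide FOV requires a lens with PMAG = 6.4 mm / 64 mm = 0.1X.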

System Magnification:

The total magnification from the object to the image on the monitor,
this is the “work” done by the whole system. It is the product of the
PMAG and the camera-to-monitor magnification (the ratio between the
monitor size and the sensor size).
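Continuing the illustrative numbers above: on a monitor with a 320 mm wide picture area, the camera-to-monitor magnification is 320 mm / 6.4 mm = 50X, so

System Magnification = PMAG x (monitor size / sensor size) = 0.1 x 50 = 5X

and the 64 mm field of view fills the 320 mm monitor width (64 mm x 5 = 320 mm).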

Resolution:

Resolution is a measurement of the imaging system’s ability to
reproduce object detail. Figure 2 illustrates a simplified view of two
squares imaged onto pixels of a CCD camera. In order for two objects to
be resolvable, a certain amount of space between the objects is needed.
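A rough rule of thumb that ignores lens performance and considers only the camera: at least two pixels are needed to separate two features (one for each feature, or one feature and one gap), so the smallest resolvable spacing in object space is on the order of

smallest resolvable feature ≈ 2 x pixel size / PMAG

For example, with 10 µm pixels and the PMAG of 0.1X used above, each pixel covers 0.1 mm on the object, so features spaced closer than about 0.2 mm cannot be reliably resolved by the sensor alone.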

F-Number (f/#):

A measure of the light-gathering ability of a lens. f/# increases as the lens aperture is closed (see Figure 3). Closing the aperture also increases the system’s depth of field (see definition below), but the illumination must be increased to compensate.
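For reference, f/# relates the focal length to the effective aperture (entrance pupil) diameter:

f/# = focal length / aperture diameter

Each full stop (an increase of f/# by a factor of about 1.4, e.g. f/4 to f/5.6) halves the light reaching the sensor, which is why extra illumination is usually needed when stopping down to gain depth of field.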

C/CS-Mount:

These are both threaded mounts (1 in. diameter, 32 threads per inch) found on most industrial CCD cameras and lenses. The flange (back focal) distance to the sensor is 17.52mm for C-Mount; CS-Mount has a flange distance of 12.52mm. C-Mount lenses can be used on CS-Mount cameras by adding a 5mm spacer (#03-618).

Depth Of Field:

The maximum object depth that can be maintained entirely in focus.
DOF is also the amount of object movement (in and out from best focus)
allowable while maintaining a desired amount of focus. As the object is
placed closer or farther than the working distance, it goes out of focus
and both the resolution and contrast suffer. For this reason, DOF only
makes sense if it is defined with an associated resolution and contrast.
In general, DOF can be increased by closing the lens aperture (i.e.
increasing the f/#), however, the illumination must be increased
accordingly.

Figure 2: Exaggerated example in which a pair of red squares are (a) not resolved and (b) resolved.

(see also Imaging Case Study)

Reference: http://www.edmundoptics.com/technical-support/imaging/understanding-video-and-imaging-equipment/


Accessing a MIL buffer directly

If needed, a MIL buffer’s contents can be accessed directly. For instance, if you want to calculate the average value of the pixels of your image, you could write a custom algorithm and apply it directly to the MIL buffer (allocated with MbufAlloc...()), without copying its contents to and from a user-allocated array with MbufGet() and MbufPut(). Doing so is more efficient and can improve the performance of the custom algorithm.

To access a MIL buffer directly, you need to know the buffer’s address and its pitch. Once you know these, you can walk the pixels directly for optimum performance.



The address of a parent or child buffer can be returned using MbufInquire(). Selecting M_HOST_ADDRESS returns a logical address, while M_PHYSICAL_ADDRESS returns a physical address. In either case, the returned address is that of the top left-most pixel of the specified buffer. Knowing the pitch and the depth of the buffer then lets you compute the address of each following row.


The pitch of a buffer is the number of units between the beginnings of two adjacent lines of the buffer’s data, and can be measured in pixels or in bytes. Note that in some cases the pitch in bytes is more accurate than the pitch in pixels. If a line does not end exactly on a 32-bit boundary (required by Windows), the next line starts at the beginning of the following 32-bit boundary; this is called internal padding. When the pitch is measured in pixels, the padding is counted as “extra” pixels, and depending on the pixel depth this can result in an inaccurate pitch.
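As a minimal sketch of the averaging example mentioned above, assuming an 8-bit, single-band image buffer that resides in host memory (so M_HOST_ADDRESS returns a usable pointer); the MIL_INT type follows more recent MIL headers, older releases use long:

#include <mil.h>

/* Average of an 8-bit single-band buffer, accessed in place. */
double AveragePixelValue(MIL_ID MilImage)
{
   void    *HostAddress = M_NULL;
   MIL_INT  PitchBytes  = 0, SizeX = 0, SizeY = 0;
   double   Sum = 0.0;

   MbufInquire(MilImage, M_HOST_ADDRESS, &HostAddress); /* logical (host) address */
   MbufInquire(MilImage, M_PITCH_BYTE,   &PitchBytes);  /* line pitch in bytes    */
   MbufInquire(MilImage, M_SIZE_X,       &SizeX);
   MbufInquire(MilImage, M_SIZE_Y,       &SizeY);

   for (MIL_INT y = 0; y < SizeY; y++)
   {
      /* Step by the byte pitch, not by SizeX, so any internal padding is skipped. */
      const unsigned char *Line = (const unsigned char *)HostAddress + y * PitchBytes;
      for (MIL_INT x = 0; x < SizeX; x++)
         Sum += Line[x];
   }
   return Sum / ((double)SizeX * (double)SizeY);
}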


How to save a MilImage to a file or load it from a file

The image format MIL uses by default is the *.mim file, which is a TIFF-based format.
So if you use MbufLoad() and MbufSave() as follows, you can save to and load from .mim files:

MbufLoad("load.mim", MilImageLoaded);
MbufSave("save.mim", MilImageSaved);

For commonly used image file formats such as JPEG, TIFF, or raw, the following functions can be used to save and load:

MbufImport("sample.jpg", M_JPEG_LOSSY, M_RESTORE, MilSystem, &MilImage);
MbufExport("sample.tif", M_TIFF, MilImage);

A sequence of images can also be saved as an AVI file; for that, use the MbufImportSequence() and MbufExportSequence() functions.
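Going back to the single-image case, a minimal end-to-end sketch (file names are placeholders; with M_RESTORE, MbufImport() allocates the destination buffer itself, so it must be freed with MbufFree(); M_SETUP is the MappAllocDefault() init flag used by older MIL releases, newer ones use M_DEFAULT):

#include <mil.h>

int main(void)
{
   MIL_ID MilApplication, MilSystem, MilDisplay;
   MIL_ID MilImage = M_NULL;

   /* Default application, system and display; no digitizer or default buffer needed. */
   MappAllocDefault(M_SETUP, &MilApplication, &MilSystem, &MilDisplay, M_NULL, M_NULL);

   /* M_RESTORE allocates a new buffer sized to the file's contents. */
   MbufImport("sample.jpg", M_JPEG_LOSSY, M_RESTORE, MilSystem, &MilImage);

   MdispSelect(MilDisplay, MilImage);            /* show the imported image */
   MbufExport("sample.tif", M_TIFF, MilImage);   /* re-save it as a TIFF    */

   MbufFree(MilImage);                           /* free what M_RESTORE allocated */
   MappFreeDefault(MilApplication, MilSystem, MilDisplay, M_NULL, M_NULL);
   return 0;
}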


Tips for using MIL in Visual Studio

1. Using the MIL help

– After installing MIL, go to Menu > Tools > Customize and open the Add-ins and Macro Files tab; you will see the following items have been added:
.Matrox ActiveMIL VC++ Add-In
.Matrox MIL VC++ Add-In
Enable the Matrox MIL VC++ Add-In here. Then, while working in Visual C++, you can get help on a MIL function by placing the cursor on the function name and pressing Ctrl+F1, which opens the MIL help directly.


2. LINK : fatal error LNK1104: cannot open file "mil.lib"
(or error LNK2001: unresolved external symbol)

As anyone who has done a bit of programming will know, this error means the MIL library could not be found when building. In that case, go to Menu > Tools > Options, open the Directories tab, and add the path containing the MIL library files to the directories for Library files.

In a typical installation the path is:
C:\PROGRAM FILES\MATROX IMAGING\MIL\LIBRARY\WINNT\MSC\DLL

Do the same for the Include files; the path is:
C:\PROGRAM FILES\MATROX IMAGING\MIL\INCLUDE

(Note) You may also run into the error above when working with source code that was written on another machine. This is because the library path recorded in the existing workspace differs from the path on the computer you are using, so the same link error occurs even if the directories above are set correctly. In that case, delete the *.lib file listed in the workspace view and recompile.
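One further option, not covered above but sometimes convenient with Microsoft's compiler: the library can also be requested directly from source code, so the project no longer depends on the workspace's library list (the directory containing mil.lib must still be on the Library files path):

/* MSVC-specific: ask the linker to pull in mil.lib from any source file in the project. */
#pragma comment(lib, "mil.lib")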


Matrox Imaging Library (MIL) – System Structure

 

2. MIL Software System Structure

2.1 MIL/MIL-Lite Modules

– MIL is made up of the Application, System, Buffer, Display, Digitizer, Graphics context, and Threading modules. Just as new and delete go together in MFC/C++, each module's objects must be created and later destroyed.

Let's take a quick look at each module. ^^


 


2.1.1 Application module

– The Mapp...() functions control the application environment.

-> They handle resource initialization, error messages, and user-defined functions.

e.g.) MappAlloc()
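A small fragment, for example (the two-argument MappControl() form of older MIL releases is assumed; error-print control is just one example of what the application context governs):

MIL_ID MilApplication;

MappAlloc(M_DEFAULT, &MilApplication);   /* initialize the MIL application       */
MappControl(M_ERROR, M_PRINT_DISABLE);   /* e.g. turn off error message printing */
/* ... use MIL ... */
MappFree(MilApplication);                /* release the application context      */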


 


2.1.2 System module

All functions of the System module start with Msys (MsysAlloc, MsysControl, MsysFree, MsysGetHookInfo, MsysHookFunction, MsysInquire).

For example, MsysAlloc() opens a communication channel and the hardware resources; the module also covers data transfers and control of serial interfaces (UARTs).
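For example (the four-argument MsysAlloc() form of older MIL releases; M_SYSTEM_DEFAULT stands for whatever default system is configured, and a specific constant such as M_SYSTEM_HOST can be used instead):

MIL_ID MilSystem;

MsysAlloc(M_SYSTEM_DEFAULT, M_DEFAULT, M_DEFAULT, &MilSystem);  /* open the system */
/* ... allocate buffers, displays, digitizers on MilSystem ... */
MsysFree(MilSystem);                                            /* close it        */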


 


2.1.3 Buffer module

– The Mbuf...() functions set up data buffers.

For example, to store monochrome or color data, you must allocate a buffer suited to the intended use.

e.g.) monochrome: MbufAlloc2d(), color: MbufAllocColor()
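For example (the 640 x 480 size and the attribute flags are only illustrative; MilSystem is assumed to be an already-allocated system):

MIL_ID MilMonoImage, MilColorImage;

/* Monochrome: one band of 8-bit unsigned data, usable for processing and display. */
MbufAlloc2d(MilSystem, 640, 480, 8 + M_UNSIGNED,
            M_IMAGE + M_PROC + M_DISP, &MilMonoImage);

/* Color: same idea, but with three bands (e.g. RGB). */
MbufAllocColor(MilSystem, 3, 640, 480, 8 + M_UNSIGNED,
               M_IMAGE + M_PROC + M_DISP, &MilColorImage);

/* ... */
MbufFree(MilColorImage);
MbufFree(MilMonoImage);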


 


2.1.4 Display module

– The Mdisp...() functions define on which display the data contained in a buffer will be shown.

MdispSelect() is one of the functions that let you select which buffer to display.
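For example (MilSystem and MilImage are assumed to exist already; the "M_DEFAULT" display format and M_WINDOWED init flag follow the older MIL convention):

MIL_ID MilDisplay;

MdispAlloc(MilSystem, M_DEFAULT, "M_DEFAULT", M_WINDOWED, &MilDisplay);
MdispSelect(MilDisplay, MilImage);   /* show the buffer on this display */
/* ... */
MdispSelect(MilDisplay, M_NULL);     /* deselect before freeing         */
MdispFree(MilDisplay);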


 


2.1.5 Digitizer module

– The Mdig...() functions control image/video acquisition. They require a DCF (Digitizer Configuration Format) file.

After configuration, the grabbed image can be stored in a buffer using MdigGrab().
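For example (the "M_DEFAULT" DCF name picks up the digitizer format configured for the system, and a specific .dcf file name can be passed instead; MilGrabImage is assumed to be a buffer allocated with the M_GRAB attribute):

MIL_ID MilDigitizer;

MdigAlloc(MilSystem, M_DEFAULT, "M_DEFAULT", M_DEFAULT, &MilDigitizer);
MdigGrab(MilDigitizer, MilGrabImage);   /* grab one frame into the buffer */
/* ... */
MdigFree(MilDigitizer);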


 


 


2.1.6 MIL Graphics Context

– You can think of the Mgra...() functions as roughly the MIL counterpart of a DC (device context) in MFC. ^^

e.g.) MgraRect(), MgraArc(), MgraRectFill(), MgraText(), MgraArcFill(), MgraLines(), MgraLine(), MgraDot(), MgraDots(), MgraFill(), MgraClear().
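For example, drawing into an image buffer with the default graphics context (coordinates and text are placeholders; MilImage is assumed to be an allocated image buffer):

MgraColor(M_DEFAULT, 255);                        /* drawing color (white for 8-bit) */
MgraRect(M_DEFAULT, MilImage, 50, 50, 200, 150);  /* outline rectangle               */
MgraText(M_DEFAULT, MilImage, 60, 60, "ROI");     /* annotate it                     */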


 


2.1.7 MIL architecture hierarchy

– Call Alloc() and Free() in the proper order.

Alloc: Application -> System -> Digitizer -> Buffer -> Display

Free: proceed in the reverse order of Alloc. ^^

(Note that, at a minimum, the Application, System, and Buffer must be allocated.)
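A minimal end-to-end sketch of that order, under the same assumptions as the fragments above (default system, default DCF, illustrative buffer size):

#include <mil.h>

int main(void)
{
   MIL_ID MilApplication, MilSystem, MilDigitizer, MilImage, MilDisplay;

   /* Allocate: Application -> System -> Digitizer -> Buffer -> Display. */
   MappAlloc(M_DEFAULT, &MilApplication);
   MsysAlloc(M_SYSTEM_DEFAULT, M_DEFAULT, M_DEFAULT, &MilSystem);
   MdigAlloc(MilSystem, M_DEFAULT, "M_DEFAULT", M_DEFAULT, &MilDigitizer);
   MbufAlloc2d(MilSystem, 640, 480, 8 + M_UNSIGNED,
               M_IMAGE + M_GRAB + M_PROC + M_DISP, &MilImage);
   MdispAlloc(MilSystem, M_DEFAULT, "M_DEFAULT", M_WINDOWED, &MilDisplay);

   /* Grab one frame and show it. */
   MdigGrab(MilDigitizer, MilImage);
   MdispSelect(MilDisplay, MilImage);

   /* Free in reverse: Display -> Buffer -> Digitizer -> System -> Application. */
   MdispFree(MilDisplay);
   MbufFree(MilImage);
   MdigFree(MilDigitizer);
   MsysFree(MilSystem);
   MappFree(MilApplication);
   return 0;
}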


 


Everything in MIL starts with the M_ prefix, and the identifiers MIL uses are of type MIL_ID (MIL_ID == long).

2.1.8 Using the defaults in MIL

MappAllocDefault() sets things up using the System, Display, Digitizer, and Buffer configured in MilConfig. (To change these defaults, use the MilConfig tool.)

e.g.) MappAllocDefault(InitFlag, &ApplicationId, &SystemId, &DisplayId, &DigitizerId, &ImageBufId)
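A minimal sketch of the default setup (this assumes a digitizer is configured in MilConfig; M_SETUP is the init flag of older MIL releases, newer ones use M_DEFAULT; any of the ID pointers may be replaced by M_NULL to skip that allocation):

#include <mil.h>

int main(void)
{
   MIL_ID MilApplication, MilSystem, MilDisplay, MilDigitizer, MilImage;

   /* Application plus the default System, Display, Digitizer and
      grab/display buffer configured in MilConfig.                */
   MappAllocDefault(M_SETUP, &MilApplication, &MilSystem, &MilDisplay,
                    &MilDigitizer, &MilImage);

   MdigGrab(MilDigitizer, MilImage);    /* grab one frame */
   MdispSelect(MilDisplay, MilImage);   /* show it        */

   /* Release everything allocated by MappAllocDefault(). */
   MappFreeDefault(MilApplication, MilSystem, MilDisplay, MilDigitizer, MilImage);
   return 0;
}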