Tuesday, June 9, 2015

How to make an FPGA Partially Reconfigurable

Partial reconfiguration means changing the FPGA partially: only a selected part of the FPGA is reconfigured while the rest continues to operate. It can be done in two ways:

i)    Module-based partial reconfiguration
ii)   Partial reconfiguration on the basis of difference

Module based partial reconfiguration:


In module-based reconfiguration we reconfigure a specific module: by changing only a selected module we can partially reconfigure our FPGA. The portions of the design to be reconfigured are known as reconfigurable modules. Specific properties and specific layout criteria must be met for a reconfigurable module, so an FPGA design intending to use partial reconfiguration must be planned and laid out with that in mind.
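As a software analogy (illustrative only: real partial reconfiguration swaps partial bitstreams into the FPGA fabric, not Python objects), the idea can be pictured as a static region that keeps running while the module occupying a reconfigurable partition is replaced:

```python
# Toy analogy of module-based partial reconfiguration: the "static
# region" (here, an uptime counter) keeps operating while the module
# loaded into the "reconfigurable partition" is swapped out.

def adder(a, b):          # reconfigurable module #1
    return a + b

def multiplier(a, b):     # reconfigurable module #2
    return a * b

class Fpga:
    def __init__(self):
        self.uptime = 0           # static-region state, never reset
        self.partition = adder    # module initially loaded

    def tick(self, a, b):
        self.uptime += 1          # static logic keeps running
        return self.partition(a, b)

    def reconfigure(self, module):
        self.partition = module   # only the partition changes

fpga = Fpga()
r1 = fpga.tick(3, 4)              # 7, with adder loaded
fpga.reconfigure(multiplier)      # swap modules; uptime is preserved
r2 = fpga.tick(3, 4)              # 12, with multiplier loaded
```

The point of the analogy is that `uptime` (the static region) is unaffected by the swap, just as the unchanged portions of an FPGA design keep operating during partial reconfiguration.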

Partial reconfiguration on the basis of difference:


Partial reconfiguration on the basis of difference is a method of making small changes in an FPGA design, such as changing I/O standards, LUT equations, and block RAM content.

Applications:


i)    To reduce power consumption and make the design more power-efficient.
ii)    Through the JTRS program, SDRs are becoming a reality for the defense industry as an effective and necessary communication tool. SDRs meet the JTRS standard by having both a software-reprogrammable operating environment and the ability to support multiple channels and networks simultaneously.
iii)    Partial reconfiguration is useful in a variety of applications across many industries; the aerospace and defense industries in particular have taken advantage of its capabilities.
iv)    Increased system performance. Although a portion of the design is being reconfigured, the rest of the system can continue to operate; there is no loss of performance or functionality in the unaffected portions of the design.
v)    Hardware sharing. Because partial reconfiguration allows you to run multiple applications on a single FPGA, hardware sharing is realized. Benefits include reduced device count, reduced power consumption, smaller boards, and, in particular, lower costs.

Key steps used in the Xilinx flow to make an FPGA partially reconfigurable:


1: Create the processor hardware system.
2: Create the software project.
3: Create a PlanAhead project.
4: Define a reconfigurable partition.
5: Add the reconfigurable modules.
6: Floorplan the reconfigurable partition region.
7: Run the design rule checker.
8: Create the first configuration, implement it, and promote it.
9: Create the other configurations and implement them.
10: Run partial reconfiguration to verify functionality.
11: Generate the bit files.
12: Create an image and test.

Advantages:


There are many advantages to making an FPGA partially reconfigurable; some of them are the following:

I)    To make the device more efficient.
II)    To reduce the LUT count of the design by replacing only a specific portion.
III)    To reduce the power consumption of the design by replacing only a specific portion.
IV)    To reduce the delay of the design by replacing only a specific portion or a specific module.





Monday, June 1, 2015

Is It The Right Path To Pursue PhD Research?



PhD research is the highest level of academic research conducted in universities and institutes throughout the world. The methods used to conduct PhD research must be sound, and the data that results from the research must be unique. Many scientific methodologies have been created to ensure the soundness and uniqueness of doctoral research and to secure continuity in research results. The most often employed methodologies are quantitative methods, qualitative methods, comparative methods and clinical trials.

The world is producing a large number of PhD holders, and their number is increasing every year. In this era, how can an average PhD holder survive? The world's most populous countries, India and China, produce more than 40,000 PhDs every year. Many of these PhDs are completed in a short duration, with the result that the quality of the graduates is not consistent, and many PhD holders remain unemployed.




Whenever we think of doing research, we always think of doing something innovative that can change the world, but when we come to the real implementation of the idea, we find that only a few things get done during a PhD. Those few things, however, are usually unique. What should we do, and how should we do it? That depends on one's circumstances, but the research should contribute something new to its specific domain.

Silicon Mentor is a place where many new researchers are working together to make something innovative and unique. These young researchers help research students pursue their work with unique methodologies. Silicon Mentor has expertise in different domains such as computer vision, biomedical research, digital signal processing, machine learning, low-power VLSI, mixed-signal VLSI, and FPGA implementation.

Thursday, May 21, 2015

Road and Lane Detection: Different Scenarios and Models

Advanced Driver Assistance Systems are an integral part of vehicles today. They can be passive, merely alerting the driver in case of emergencies, or active, taking over vehicle controls during emergency scenarios. Such systems are expected to reach full autonomy during the next decade. The two major fields of interest in the problem are road and lane perception, and obstacle perception. The former involves detecting road and lane markers, to ensure that the vehicle's position is correct and to prevent any departures. Obstacle detection is necessary to prevent collisions with other traffic or with real-world objects like streetlights, stray animals, pedestrians, etc.

Problem Scope


Road and lane perception includes detecting the extent of the road and the number and position of lanes, including merging and splitting lanes, across different scenarios such as urban, highway or cross-country driving. While the problem seems trivial given recent advancements in image processing and feature detection algorithms, it is complicated by the presence of several challenges, such as:

•    Case diversity: Due to a variety of real-world conditions, the system has to be tolerant of a huge diversity of inputs. These include:
  1.     Lane and road appearance: color, texture and width of lanes; differences in road color, width and curvature.
  2.     Image clarity: presence of other vehicles, shadows cast by objects, sudden changes in illumination.
  3.     Visibility conditions: wet roads, presence of fog or rain, night-time conditions.
•    High reliability demands: In order to be useful and acceptable, the assistance system should achieve very low error rates. A high rate of false positives will lead to driver irritation and rejection, while false negatives will cause system compromise and low reliability.

Modalities Used


State-of-the-art research and commercial systems are looking at several perception modalities as sensors. A quick view of their operation, pros and cons is presented here:

1.    Vision: Perhaps the most intuitive approach is to use vision-based systems, as lane and road markers are already optimized for detection by human vision. A front-mounted camera is the standard approach in almost all systems, and it can be argued that since most of the signature of lane markings lies in the visual domain, no detection system can totally ignore the vision modality. However, it must be stressed that the robustness of current state-of-the-art processing algorithms is far from satisfactory, and they also lack the adaptive power of a human driver.

2.    LIDAR: An emerging technology is the use of Light Detection and Ranging sensors, which can produce a 3D structure of the vehicle's surroundings, increasing robustness because obstacles are more easily detected in 3D. In addition, LIDARs are active sensors and are thus more adaptive to changes in illumination. LIDAR sensors are, however, very expensive.

3.    Stereo-vision: Stereo-vision uses two cameras to obtain the 3D information, which is much cheaper in terms of hardware but requires significant software overhead. It also has poorer accuracy and a higher probability of error.

4.    Geographic Information Systems: The use of a prior geographic database together with a known host-vehicle position can in effect replace the on-board processing requirement and enable worldwide 'blind' autonomous driving. However, the system needs very accurate vehicle positioning, as well as real-time updates to the geographic database with changing traffic dynamics and obstacle positions, either from satellite imagery or GPS measurements. The uncertainty in obtaining and updating highly accurate map information over large terrains has constrained it to being a complementary tool to on-board processing.

5.    Vehicle Dynamics: The presence of sensors like Inertial Measurement Units (IMUs) provides insight into the motion parameters of the vehicle such as speed, yaw rate and acceleration. This information is used in the temporal integration module, to relate data across several time-frames.
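As a rough sketch of how vehicle-dynamics data feeds temporal integration, IMU speed and yaw-rate readings can be used to propagate the vehicle pose from one frame to the next. This is a simple kinematic model; the function and parameter names are illustrative, not taken from any specific system:

```python
import math

def propagate_pose(x, y, heading, speed, yaw_rate, dt):
    """Advance a 2-D vehicle pose by one time step using speed and
    yaw rate from an IMU (simple kinematic model, illustrative only)."""
    heading += yaw_rate * dt                 # update orientation first
    x += speed * math.cos(heading) * dt      # move along the new heading
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Example: 10 m/s straight ahead over one 0.1 s frame interval
# moves the vehicle 1 m forward with an unchanged heading.
pose = propagate_pose(0.0, 0.0, 0.0, speed=10.0, yaw_rate=0.0, dt=0.1)
```

A lane model fitted in the previous frame can be shifted by this predicted motion before being reconciled with the current frame's detections.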

Generic Solutions


The road and lane detection problem can be broken into the following functional modules. The implementation of these modules differs across research and commercially available systems, but the 'generic system' presented here serves as a holistic skeleton for them.

1.    Image Cleaning: A pre-filter is applied to the image to remove most of the noise and clutter arising from obstacles, shadows, over- and under-exposure, lens flare and vehicle artifacts. If training data is available, or data from previous frames is harnessed, a suitable region of interest can be extracted from the image to reduce processing.

2.    Feature Extraction: Based on the required subtask, low-level features such as road texture, lane-marker color and gradient, etc. are extracted.

3.    Model Fitting: Based on the evidence gathered, a road-lane model is fitted to the data.

4.    Temporal Integration: The model so obtained is reconciled with the model of the previous frames, or the GPS data if available for the region. The new hypothesis is accepted if the difference is explainable based on the vehicle dynamics.

5.    Post Processing: After computation of the model, this step involves translation from image to ground coordinates, and data gathering for use in processing of subsequent frames.
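The first four modules above can be sketched end-to-end in a toy form. Everything here is an illustrative assumption rather than a production algorithm: a mean blur stands in for image cleaning, brightness thresholding for feature extraction, a parabolic lane model `x = a*y^2 + b*y + c` for model fitting, and exponential blending for temporal integration, using only NumPy:

```python
import numpy as np

def clean(frame):
    """Image cleaning: 3x3 mean blur to suppress pixel noise."""
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros(frame.shape)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + frame.shape[0],
                          1 + dx : 1 + dx + frame.shape[1]]
    return out / 9.0

def extract_features(frame, threshold=0.5):
    """Feature extraction: bright pixels become lane-marker candidates."""
    ys, xs = np.nonzero(frame > threshold)
    return xs, ys

def fit_model(xs, ys):
    """Model fitting: fit a parabolic lane model x = a*y^2 + b*y + c."""
    return np.polyfit(ys, xs, 2)

def integrate(prev_coeffs, new_coeffs, alpha=0.7):
    """Temporal integration: blend the new fit with the previous frame's."""
    if prev_coeffs is None:
        return new_coeffs
    return alpha * new_coeffs + (1 - alpha) * prev_coeffs

# Synthetic frame with a straight vertical lane marker at column 10.
frame = np.zeros((20, 30))
frame[:, 10] = 1.0
xs, ys = extract_features(clean(frame), threshold=0.2)
coeffs = integrate(None, fit_model(xs, ys))
```

On the synthetic frame the fitted model evaluates to roughly x = 10 at every row, recovering the marker's position; a real system would replace each stage with far more robust components.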

Future Prospects


In concluding remarks, we can stress that road and lane segmentation are fundamental problems of driver assistance systems. The extent of complexity can range from passive Lane Departure Warning systems to fully autonomous 'blind' drivers. The next step forward is to extend the scope of current detection techniques into new domains and to improve their reliability. The first requires a better understanding and development of new road-scene models that can successfully capture multiple lanes, non-linear topographies and other non-idealities. The reliability challenge is harder, especially for closed-loop systems, where even small error rates may propagate. It may become essential to include modalities other than vision, and to incorporate machine learning to train algorithms better.




Tuesday, May 19, 2015

Surround View System for Vehicles and its Advantages for Drivers



ADAS - Advanced Driver Assistance Systems - is a very popular research area around the globe and has tremendous future scope. Within ADAS, the latest developing area and market is that of surround-view (also called surround-vision or top-view) systems. These systems provide the driver with a central display of the vehicle from a bird's-eye perspective; hence, another name given to such systems is "bird's-eye view". A glimpse of surround vision is shown in the figure below. As the name suggests, a surround-view system shows the driver the vehicle's immediate surroundings. Such views are of great assistance in precision operations, viz. parking maneuvers, driving in heavy traffic conditions, etc.




Any bird's-eye vision system typically involves 4-6 wide-angle fish-eye cameras mounted around the vehicle. The mounted cameras have a field of view of up to 180 degrees. Such lenses are preferred so that the immediate surroundings remain completely visible even after the data loss incurred while running the algorithm on the captured frames.

Two types of camera arrangements are generally seen: 

  • 4 cameras: front, back, and one on each side-view mirror.
  • 6 cameras: one at each of the four corners, plus front and back.
Of these two, the former is the most common because of its lower complexity, lower initial cost, etc.
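Whichever arrangement is used, the core image operation is warping each camera image onto the ground plane with a homography obtained from calibration, then blending the warped views into one top-view mosaic. Below is a minimal sketch of the warping step, assuming a hypothetical calibration matrix `H` and nearest-neighbour sampling (real systems also correct fish-eye lens distortion first):

```python
import numpy as np

def warp_to_top_view(img, H, out_shape):
    """Warp a camera image onto the ground (top-view) plane. H is a 3x3
    homography mapping top-view pixel coordinates to camera pixel
    coordinates; inverse mapping with nearest-neighbour sampling."""
    out = np.zeros(out_shape, dtype=img.dtype)
    ys, xs = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    # Homogeneous top-view pixel coordinates (x, y, 1).
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = H @ pts
    sx = np.round(src[0] / src[2]).astype(int)   # source column
    sy = np.round(src[1] / src[2]).astype(int)   # source row
    ok = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out

# Hypothetical calibration: a pure 2x down-scale homography.
H = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
cam = np.arange(16.0).reshape(4, 4)   # stand-in 4x4 camera frame
top = warp_to_top_view(cam, H, (2, 2))
```

In a full surround-view system this warp runs once per camera, and the overlapping regions of the four (or six) warped images are stitched and blended into the single bird's-eye display.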

Advantages to Drivers:
  1. Assistance in parking maneuvers, because surrounding vehicles and parking slots are easily visible and the driver can focus solely on driving rather than peering into the mirrors to park safely.
  2. Eliminates the need for mirrors by providing a complete view of the surroundings on a single screen.
  3. Any object or vehicle approaching or running close to the vehicle is visible at once.
  4. Being a "top view", the system is free of perspective distortion. In layman's terms, drivers are free of caveats like "objects in the mirror are closer than they appear".
  5. Works properly even on slopes because of the reasonably large field of view.
  6. Driver error is reduced or even eliminated, and efficiency in traffic and transport is enhanced.
  7. High-performance driving can be conducted regardless of vision, weather and environmental conditions.
  8. Many more vehicles can be accommodated, on regular highways but especially on dedicated lanes.