May/June 2014 (34, 03) pp. 50-58
0272-1716/14/$31.00 © 2014 IEEE

Published by the IEEE Computer Society
3D Volume Drawing on a Potter's Wheel
Sungmin Cho, Seoul National University

Dongyoub Baek, Seoul National University

Seung-Yeob Baek, Seoul National University

Kunwoo Lee, Seoul National University

Hyunwoo Bang, Seoul National University
The proposed 3D-volume-drawing interface can easily create various organic, artistic models. To provide intuitiveness, it adopts the metaphor of the potter's wheel. With one hand, users control a wooden wheel whose rotation is synchronized with that of the virtual space. A 3D depth camera captures the mid-air poses of the users' other hand, which the system translates into a virtual brush for interacting with a model. Through this configuration, the interface enables simultaneous view control and drawing. Also, because the brush's shape imitates the hand pose, the shape can vary. This makes our system flexible and maximizes expressiveness. With it, designers and artists can easily transfer their expertise to the virtual-modeling interface.
Since antiquity, potters have been using a horizontal rotating wheel. The potter places a moist lump of clay on it and rotates it with a variable-speed controller. The potter then uses his or her hands or other tools to shape the clay. Potters can also use various tools with different tip shapes to trim or sculpt the clay. The potter's wheel became widespread not only because of potters’ know-how and the simple shaping process but also because they can use it to materialize their ideas intuitively and straightforwardly. These aspects let them easily create complex, organic shapes.
Today, the trend in artistic design is to exploit the advantages of traditional artistic methods using CAD systems because they reduce the effort of prototyping and creating work. (For more on this, see the sidebar.) The convergence of CAD technology and art has brought many benefits, especially new methodologies for artistic design. However, barriers still stop many artists from using CAD systems.
The most significant barrier is the lack of intuitiveness of conventional keyboard-and-mouse input. So, designers with no engineering background must invest considerable effort to understand CAD modeling mechanisms and workflow. Most important, the lack of intuitiveness interrupts the flow and visualization of creative thinking. Traditional artistic methods reflect the artists’ techniques and artistry throughout the process through direct, intuitive interfacing with various tools. In contrast, CAD systems can be a huge obstacle to the projection of artists’ feelings and emotions.
One solution to this problem might be to exploit the intuitive aspects of the potter's wheel. Toward that end, we propose a shape-modeling system that allows direct, simultaneous interaction between the user's hand and the modeled product. Our system has two interaction channels. The nondominant hand spins the wheel. The dominant hand generates and modifies the model through 3D drawing poses, which are converted to virtual brushstrokes. This system allows the transfer of real-world manufacturing expertise to a virtual environment for 3D design. In addition, it retains pottery's unique characteristics by preserving its essential mechanism of shape formation.
What Users See and Do
Figure 1 shows the setup. The display is approximately 0.7 m in front of the user, which is the minimum distance required to collect the point cloud data from the depth camera. It shows the basic modeling space and side and overhead views of the model. In front of the display are the potter's wheel and four pedals. A Microsoft Xbox Kinect serves as the depth camera, which we position to encompass the region of interest (ROI) above the wheel.
Figure 1. The system's hardware configuration. The system includes tangible controllers and optical sensors.
We designed it this way for several reasons. First, if the display were closer than 0.7 m, the depth camera would have to sit behind it, and the display would block the camera's view of the wheel and hand motion. An overhead camera could be a solution, but the Kinect would have to be installed too high (more than 2.5 m). Moreover, a proper distance between the user and display prevents the user's hand from obscuring the model.
As we mentioned before, users spin the wheel with one hand. When users’ other hand is above the wheel, the selected brush follows its movements in virtual space. Users can create, edit, or smooth the virtual material with the brush.
To activate a modeling mode (create, edit, or smooth), users hold down one of three pedals; each pedal is mapped to the keyboard input for that mode. When users take their foot off the pedal, the mode deactivates. A fourth pedal switches the brush shape, as we describe later.
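To make the mode-switching behavior concrete, here is a minimal sketch of the pedal handling described above. The pedal identifiers, the ModeController class, and the hold-to-activate logic are our own illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the pedal-to-mode mapping described above.
from enum import Enum, auto

class Mode(Enum):
    IDLE = auto()
    CREATE = auto()
    EDIT = auto()
    SMOOTH = auto()

# Each pedal is assumed to emit the same events as a mapped keyboard key.
PEDAL_TO_MODE = {"pedal_1": Mode.CREATE, "pedal_2": Mode.EDIT, "pedal_3": Mode.SMOOTH}

class ModeController:
    def __init__(self):
        self.mode = Mode.IDLE
        self.brush_menu = False  # fourth pedal: brush-shape adjustment (assumed toggle)

    def on_pedal_down(self, pedal):
        if pedal == "pedal_4":
            self.brush_menu = not self.brush_menu   # switch to brush-shape adjustment
        else:
            self.mode = PEDAL_TO_MODE[pedal]        # hold the pedal to stay in the mode

    def on_pedal_up(self, pedal):
        if pedal != "pedal_4" and self.mode == PEDAL_TO_MODE.get(pedal):
            self.mode = Mode.IDLE                   # releasing the pedal deactivates the mode

ctrl = ModeController()
ctrl.on_pedal_down("pedal_1")   # foot on first pedal -> create mode
assert ctrl.mode is Mode.CREATE
ctrl.on_pedal_up("pedal_1")     # foot off -> mode deactivates
assert ctrl.mode is Mode.IDLE
```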
The Wheel and Control Volume
The system uses a control volume (CV) to recognize hand movements and control the modeler within a bounded space. Any hand (or physical object) that enters the CV can specify a brush. The system transfers the brush to another virtual space, where it is converted to scalar-field values; the system renders the model on the basis of this scalar-field representation. A rendering volume (RV) stores the scalar-field values in a 3D grid at regular intervals. So, the point cloud data drives brush acquisition in the CV, the updating of the scalar-field values, and the rendering of the drawing.
Tracking the Wheel Speed
The wheel comprises a base and a spinning plate. A ball bearing between the two lets users rotate the wheel in any direction with little friction. Under the plate is an upside-down wireless mouse, which uses its optical sensor to detect the wheel's rotation speed. The system coordinates the virtual space's rotation with this sensory input.
The system maps the mouse's reported displacement to the wheel's rotation angle. We found the scaling factor by measuring the mouse movement while rotating the wheel at a constant rate. Figure 2 shows the tracking-test results.
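The mapping from mouse displacement to wheel angle can be sketched as follows. The COUNTS_PER_DEGREE constant stands in for the scaling factor the authors calibrated; its value here is an arbitrary assumption.

```python
# Minimal sketch of wheel-angle tracking from the inverted wireless mouse.
COUNTS_PER_DEGREE = 40.0   # assumed calibration value (mouse counts per degree of wheel rotation)

class WheelTracker:
    def __init__(self):
        self.angle_deg = 0.0   # accumulated rotation of the virtual space

    def on_mouse_delta(self, dx_counts: float) -> float:
        """Convert the mouse's reported displacement into a wheel rotation angle."""
        self.angle_deg = (self.angle_deg + dx_counts / COUNTS_PER_DEGREE) % 360.0
        return self.angle_deg

tracker = WheelTracker()
print(tracker.on_mouse_delta(120.0))   # e.g., 3.0 degrees of wheel (and virtual-space) rotation
```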
Figure 2. Tracking-test results. As you can see, the system accurately synchronized the virtual model with the rotating wheel.
Setting the Control Volume
To transfer real-world movement to virtual action, we set the CV as the space above the potter's wheel and map the real spatial coordinates to virtual space. A precise mapping is necessary for accurate interaction.
So, we must define the CV in the virtual space to capture the hand movements above the potter's wheel and exclude extraneous movements. To achieve this, we specify a cylindrical volume immediately above the point cloud of the wheel's upper surface.
To specify the CV, we first manually set the ROI around the wheel to exclude the background points. The ROI roughly includes the wheel's upper-surface points and excludes the rest of the wheel (see Figure 3a ).
Figure 3. Configuring the control volume (CV). A depth camera obtains the scattered points (point clouds). (a) Selecting some points of the wheel's upper surface. (b) Determining a plane coinciding with the upper surface. (c) The wheel's exact upper surface. (d) A virtual cylinder, which is the CV.
We then define the upper surface's normal vector to identify the direction of the rotation axis (see Figure 3b ). Accurate determination of the axis is crucial because the accuracy relates directly to the CV rotation's eccentricity.
Next, we use surface fitting to find the upper surface's exact point cloud (see Figure 3c). To do this, we extract the upper surface's point cloud from the scene, using this discrimination scheme:

$$\left| \left\langle \mathbf{n}, \mathbf{x} - \mathbf{c} \right\rangle \right| < \varepsilon,$$

where n is the upper surface's normal vector, x is a point in the point cloud, c is the ROI's center, and ε is the gap (threshold) for detecting the upper surface's point cloud. We can adjust ε according to the depth camera used. We found that 1.5 mm was a reasonable value for our depth sensor.
Finally, we determine the CV's center as the average of the extracted points and obtain the axis by surface fitting to determine the CV's position. Figure 3d shows the resulting CV.
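The CV setup can be sketched roughly as below, assuming roi_points is an array of depth-camera points already cropped to the manually chosen ROI. The least-squares plane fit stands in for the surface fitting described above; the 1.5 mm threshold follows the text, and everything else is illustrative.

```python
# Sketch of the control-volume setup under stated assumptions.
import numpy as np

def fit_upper_surface(roi_points: np.ndarray, eps: float = 0.0015):
    """Fit a plane to the ROI points and keep points within eps (1.5 mm) of it."""
    c = roi_points.mean(axis=0)                      # ROI center
    # Least-squares plane normal = direction of smallest variance (last right-singular vector).
    _, _, vt = np.linalg.svd(roi_points - c, full_matrices=False)
    n = vt[-1]                                       # rotation-axis direction
    # Discrimination scheme: |<n, x - c>| < eps selects the upper-surface points.
    dist = np.abs((roi_points - c) @ n)
    surface = roi_points[dist < eps]
    cv_center = surface.mean(axis=0)                 # CV center = average of extracted points
    return cv_center, n, surface

# Toy example: a noisy horizontal disc standing in for the wheel's upper surface.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
r = 0.15 * np.sqrt(rng.uniform(0, 1, 2000))
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta),
                       rng.normal(0.0, 0.0005, 2000)])
center, axis, surf = fit_upper_surface(pts)
print(center, axis, surf.shape)
```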
The Brush
We use principal component analysis (PCA) of the extracted points to determine the brush's orientation and size. For PCA, we use the set of points near the brush point (see Figure 4a). We determine the set of points X as

$$X = \left\{ \mathbf{x} \;\middle|\; \left\| \mathbf{x} - \mathbf{c} - \left\langle \mathbf{x} - \mathbf{c}, \mathbf{a} \right\rangle \mathbf{a} \right\| - r_0 < \alpha \right\},$$

where x is a point in the point cloud, c is the brush's center, a is the rotation axis, and $r_0$ is the distance from the rotation axis to the brush point. We set the threshold α to the length of the average person's distal phalanges (fingertip bones), approximately 30 mm, so that modeling appears to occur at the fingertips. However, users can also draw 3D shapes using a brush that reflects different hand poses (see Figure 4b).
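A rough sketch of the PCA step follows, assuming X holds the points selected by the set definition above. The covariance eigendecomposition yields the principal values and vectors used for the brush; variable names and the toy input are illustrative assumptions.

```python
# Illustrative PCA of the hand points selected near the brush point.
import numpy as np

def brush_from_points(X: np.ndarray):
    """Return brush center, principal axes, and principal values from hand points."""
    center = X.mean(axis=0)
    cov = np.cov((X - center).T)               # 3x3 covariance of the hand points
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = eigvals.argsort()[::-1]            # largest principal value first
    return center, eigvecs[:, order], eigvals[order]

# Toy hand-shaped point cloud: elongated along x (an extended finger, say).
rng = np.random.default_rng(1)
hand = rng.normal(0, 1, (500, 3)) * np.array([0.03, 0.01, 0.008])
c, axes, lams = brush_from_points(hand)
print(np.sqrt(lams))   # principal radii used for the ellipsoidal brush
```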
Figure 4. Dealing with brushes. (a) The relationship between the user input, brush, and scalar field. The scalar field has a greater value at the nearer locations, which generates denser brush strokes. (b) Brush sizes for different hand poses. (c) Various brush shapes.
Brush Shape
Here we describe how the brush changes the scalar-field values in real time for volume rendering. We use the statistics of the extracted points to produce an essentially ellipsoidal brush. To define a brush, we use the three eigenvalues (principal values) and three eigenvectors (normalized principal vectors) from the PCA. On the basis of this information, users can adjust the dispersion of the points, so they can freely change the brush size while interacting with the model. The scalar-field value at a position in the brush comes from evaluating the brush's superquadric function,

$$f(x', y', z') = \left| \frac{x'}{\sqrt{\lambda_1}} \right|^{l} + \left| \frac{y'}{\sqrt{\lambda_2}} \right|^{m} + \left| \frac{z'}{\sqrt{\lambda_3}} \right|^{n}, \tag{1}$$

where (x', y', z') are coordinates in the brush's principal frame and $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the eigenvalues.



To sustain adequate performance for real-time 3D rendering and interaction, only the scalar-field values in the brush's local area are refreshed each frame. We also apply a low-pass temporal filter to the scalar-field values to animate the material. So, each scalar-field value depends both on how long the brush dwells near a position and on that position's distance from the brush center.
In Equation 1, the exponent parameters l, m, and n change the brush shape. Each parameter defines the degree of the shape's corresponding axis elements (see Figure 4c ). When l, m, and n are 2.0, the shape is the original ellipsoid. As l, m, or n decreases, the shape of the corresponding axis becomes sharper; as l, m, or n increases, the shape becomes more rectangular. After pushing the fourth pedal, users can change these parameters with the first three pedals and wheel rotation.
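A minimal sketch of Equation 1 follows, assuming (x', y', z') are coordinates in the brush's principal frame with the brush center at the origin. The eigenvalues and exponent choices are example values only.

```python
# Evaluating the superquadric brush function of Equation 1 at a local point.
import numpy as np

def superquadric_brush(p_local, lam, l=2.0, m=2.0, n=2.0):
    """Evaluate Equation 1 at local coordinates p_local = (x', y', z')."""
    x, y, z = p_local
    l1, l2, l3 = lam                               # principal values (eigenvalues)
    return (abs(x / np.sqrt(l1)) ** l +
            abs(y / np.sqrt(l2)) ** m +
            abs(z / np.sqrt(l3)) ** n)

lam = (9e-4, 1e-4, 6.4e-5)                         # example eigenvalues from the PCA step
p = (0.01, 0.0, 0.0)
print(superquadric_brush(p, lam))                              # l = m = n = 2.0: the original ellipsoid
print(superquadric_brush(p, lam, l=0.6, m=0.6, n=0.6))         # smaller exponents: sharper brush
print(superquadric_brush(p, lam, l=8.0, m=8.0, n=8.0))         # larger exponents: more rectangular brush
```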
Brushes and the Three Modes
In create mode, users can create and add material to the RV using the brush. To ensure a smooth connection between the created materials, when the user makes a change, we replace the scalar-field value at each position with a new one, except when the new value would be less than the previous one. For the graph in Figure 5a, we used

$$f_{\mathrm{new}}(\mathbf{x}) = \begin{cases} \dfrac{1}{\left|\mathbf{x} - \mathbf{c}\right|}, & f_{\mathrm{pre}}(\mathbf{x}) \le \dfrac{1}{\left|\mathbf{x} - \mathbf{c}\right|} \\ f_{\mathrm{pre}}(\mathbf{x}), & f_{\mathrm{pre}}(\mathbf{x}) > \dfrac{1}{\left|\mathbf{x} - \mathbf{c}\right|} \end{cases}.$$



Figure 5. Modifying a model through the (a) create and (b) edit modes. On the left is a diagram of the process, in the middle is the algorithm graph, and on the right are example models. The material can be visualized when the scalar-field value is greater than an isovalue.
In edit mode, users can partially delete material. So, the scalar-field value in the user's brush drops to zero. The quantity subtracted is specified by the time the user keeps the brush in one position and the brush center's distance (the opposite of create mode). We animate the material to resemble melting. For the graph in Figure 5b, we used

$$f_{\mathrm{new}}(\mathbf{x}) = \begin{cases} \left|\mathbf{x} - \mathbf{c}\right|, & f_{\mathrm{pre}}(\mathbf{x}) \le \left|\mathbf{x} - \mathbf{c}\right| \\ f_{\mathrm{pre}}(\mathbf{x}), & f_{\mathrm{pre}}(\mathbf{x}) > \left|\mathbf{x} - \mathbf{c}\right| \end{cases}.$$
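A hedged sketch of the create-mode update, combined with the low-pass temporal filtering mentioned earlier, might look like the following. The blending weight alpha, the voxel layout, and the dwell loop are illustrative assumptions, not the authors' code.

```python
# Create-mode update with a simple per-frame low-pass blend (assumed animation scheme).
import numpy as np

def create_update(field, positions, brush_center, alpha=0.3):
    """Blend each voxel toward max(f_pre, 1/|x - c|), the create-mode rule above."""
    d = np.linalg.norm(positions - brush_center, axis=-1)
    target = np.maximum(field, 1.0 / np.maximum(d, 1e-6))   # create-mode rule
    return (1.0 - alpha) * field + alpha * target            # low-pass filter over frames

# Toy voxel grid centered on the brush.
grid = np.linspace(-0.05, 0.05, 8)
positions = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
field = np.zeros((8, 8, 8))
for _ in range(5):                    # dwelling near a position for several frames raises its value
    field = create_update(field, positions, np.zeros(3))
print(field.max())
```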



The point cloud data acquired by the depth camera contains considerable noise. The statistical analysis of the points slightly reduces it. However, noise from hand movements and rough drawing still produces rough volume rendering.
Smooth mode addresses this problem. We apply 3 × 3 × 3 Gaussian filtering. Users can select local or global filtering (see Figure 6 ). Local filtering produces a smooth surface by filtering a specific area through brush input. Global filtering makes the whole surface smooth when the brush is away from the wheel's drawing space. Users apply smoothing by tapping the third pedal or pressing it continuously.
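A sketch of the smooth mode under stated assumptions: SciPy's gaussian_filter stands in for the authors' 3 × 3 × 3 Gaussian filtering, with sigma and truncate chosen so the kernel spans three voxels per axis; the local-region box around the brush is an assumption.

```python
# Local and global smoothing of the scalar field (stand-in for the authors' filter).
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_global(field, sigma=0.8):
    """Global filtering: smooth the whole scalar field."""
    return gaussian_filter(field, sigma=sigma, truncate=1.0)  # kernel radius 1 -> 3x3x3 support

def smooth_local(field, center_idx, radius=4, sigma=0.8):
    """Local filtering: smooth only a box around the brush position."""
    out = field.copy()
    sl = tuple(slice(max(c - radius, 0), c + radius + 1) for c in center_idx)
    out[sl] = gaussian_filter(field[sl], sigma=sigma, truncate=1.0)
    return out

field = np.random.default_rng(2).random((32, 32, 32))
print(smooth_global(field).std(), smooth_local(field, (16, 16, 16)).std())
```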
Figure 6. Results of template-model-based drawing. (a) Reference models. (b) Results without refinement. (c) Results after global filtering. (d) Results after local filtering and retouching.
Volume Rendering
Because our modeler employs scalar-field-based volume drawing, the marching-cubes algorithm is a natural choice. However, this algorithm is inherently slow because it must traverse all the voxels in a given volume. Owing to this inefficiency, voxel resolution is limited in real-time applications.
To solve this problem, we exploit the GPU's computational power. We upload voxel information into the texture buffer, which multiple threads can reference efficiently. This parallel implementation significantly improves computation speed. We can further improve it by skipping the cells with zero values (for example, with marching cubes using histogram pyramids1).
In our implementation, 128 × 128 × 128 voxels covered the rendering volume. Our GPU-based parallel marching-cubes technique performed smoothly at approximately 30 fps.
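As a CPU stand-in for the isosurface extraction (not the authors' GPU-parallel marching cubes), scikit-image's marching_cubes can mesh a 128 × 128 × 128 scalar field; the toy field and isovalue below are arbitrary.

```python
# CPU marching cubes over a 128^3 scalar field (illustrative stand-in only).
import numpy as np
from skimage.measure import marching_cubes

res = 128
grid = np.linspace(-1.0, 1.0, res)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
field = 1.0 / np.maximum(np.sqrt(x**2 + y**2 + z**2), 1e-6)  # toy "created" field, large near the center

verts, faces, normals, values = marching_cubes(field, level=2.0)  # mesh of the isosurface at isovalue 2.0
print(verts.shape, faces.shape)
```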
Evaluating the System
We performed experiments to determine how well a created model can follow a template model and what types of models users can make with free modeling. We also performed a study to observe users and obtain their feedback.
Template-Model-Based Modeling
This experiment tested how well the system estimates the user-placed brush. We placed a semitransparent template model at the RV's center. Users then rotated the wheel and tried to follow the model's surface with a brush. The results resembled the template model (see Figure 6 ). This showed that users could point to any location in virtual space accurately by interacting in mid-air with the model.
Free Modeling
To produce realistic rendering and artistic results, we rendered the models with the mental ray renderer in Autodesk 3ds Max. Figure 7 shows the results of several operations, such as creating symmetrical shapes, making a hole, and twisting material. As we mentioned before, we can change the brush's implicit function through parameters that are exponents of its formula. For example, we applied 0.1 and 0.6 to all the exponents (see the first and last models, respectively, in Figure 7b).
Figure 7. Results and work time using our interface. (a) Models made by participants experienced in 3D modeling. (b) Models by inexperienced participants. We used Autodesk 3ds Max for rendering.
User Study
This small, controlled study involved 10 people from our institution: five with 3D-modeling experience and five without. We briefly introduced them to the wheel, pedals, and work area and gave them less than five minutes of practice. After that, we had them make models freely.
Figure 7a shows the results for the experienced participants; Figure 7b shows the results for the inexperienced participants. There were no discernible differences related to experience. The last two images in Figure 7b show models made after those participants had already completed more than three models. As they became more comfortable with the interface, they could concentrate and make more distinctive models.
The participants could easily control the wheel without additional instruction; they spent most of their practice time drawing in mid-air and memorizing pedal functions. At first, they were uncomfortable drawing in mid-air, especially when they had to reposition their hand. However, by using hand movements and wheel rotation simultaneously, they could find the 3D position where they wanted to change the shape.
Some pedal use patterns were related to mid-air interaction. Participants held pedals down to create a model continuously but tapped or pressed pedals carefully and repeatedly to edit or smooth. In other words, they were more careful when editing and smoothing than when creating. Moreover, they spent the most time smoothing to make organic, smooth shapes.
We had exhibited an earlier version of our system over five days at the Siggraph 2012 Emerging Technologies exhibition. It had a fixed-size brush and no pedals. So, users created and edited models by clockwise and counterclockwise rotation, respectively. Approximately 500 people tried the system, and their response was enthusiastic. Spinning the wheel and drawing 3D material with their hands impressed most users. Most of them had no difficulties starting because they didn't have to use a glove-type or handheld device and no other calibration was required.
Some users liked our system because it let them focus on their drawing, even for a long time. So, people sometimes had to wait a while to try the system because the current user was taking so long. Our use of the wireless mouse with the wheel surprised some scientists, who commented that this technique was simple and smart.
However, users reported difficulties; for example, some users couldn't see the inside of the model precisely, and sometimes the brush point was missing. The latter problem was because the system at the exhibition used only the point closest to the depth camera for the brush's position. When noise occurred at that point, and that point was outside the CV, the brush point was missing even if the user's hand was in the CV. Noise from the depth camera also created points in what should have been empty space.
Discussion
Direct, intuitive modeling is possible because our system uses an exact one-to-one mapping from the real world to the virtual space. Also, because our system can also handle different hand poses and positional movements, its modeling capacity is much higher than that of similar modelers. This significantly increases the expressive range. Moreover, our system easily achieves 3D volume modeling without complex navigation or manipulation operations because it facilitates simple, direct physical interaction through the wheel, rather than clumsy gesture recognition. (A video of our system in use is at http://sungmins.com/project/turn.)
Users can freely rotate the wheel and receive immediate feedback about its speed and direction, so they can anticipate how their input will change the view. In contrast, the system gives only visual feedback regarding the brush's state. We know that haptic feedback regarding interaction with an object is an important sensory element and that the lack of physical feedback might increase dependence on visual feedback. However, we felt that mobility and the degrees of freedom of the fingers and hands were more important. Users with haptic-interaction experience gave positive feedback about not having to wear equipment on their hands.
Comparing mid-air interaction and interaction with the tangible wheel is difficult. However, bimanual interaction can increase the degree of manipulation and let the user's hand serve as a physical reference. Additionally, the wheel provides both a grounding reference and a tangible interface. Combining a grounding object with two-handed interaction allows interesting design possibilities.2
Our system uses a volumetric representation of the shape rather than a boundary representation (B-rep) such as meshes. So, users can freely model topologically complex shapes without any artifacts, as Figure 7 shows. If we had used B-rep, we would have had difficulties updating a shape after a modeling operation. Such updating would require stitching the surfaces, blending them, and performing many other surface-to-surface computations, which would be computationally demanding and generate many artifacts.
Additionally, by defining a custom-formulated implicit function as a brush shape, users can draw, sculpt, or stamp with almost any desired brush shape. This flexibility diversifies the modeling operations, letting us significantly improve the system's modeling capacity.
Although manipulating our virtual clay is similar to manipulating real clay, our virtual clay resembles paint in terms of its physical properties and how it must be handled. However, this hasn't bothered most of our users, who have understood that our system actually is a 3D modeling and drawing interface using the potter's wheel metaphor.
Unlike conventional CAD systems, our system doesn't rely on exact dimensions and geometries. Nevertheless, it has greater flexibility and performs better for specific design tasks such as conceptual design, idea visualization and development, and the modeling of complex organic shapes. Also, as Figure 7 shows, our system can generate aesthetically pleasing models with artistic value.
Regarding technical limitations, when the camera view was perpendicular to the palm direction, the system didn't detect the point cloud data sufficiently to control the brush because of the hand's self-occlusion. In addition, because users could see the inside of the model from only the overhead view, if a user made a model with a shell-like structure, editing inside it wasn't easy.
User feedback has helped us plan our research. For example, the wheel could provide tactile feedback based on the position of the hand performing mid-air interaction.
Our system could work with 3D printers for designing and prototyping real products. Even novices could create and print an organic, artistic shape. To demonstrate this, we printed some models made with our system (see Figure 8 ). So, rather than using potentially complicated computerized devices, users could easily design using real actions and make a real object from a virtual model. Such personal management of design and manufacturing will influence the industry in the near future.
Figure 8. A 3D printer created solid versions of models made with our system. Even novices could create and print an organic, artistic shape.
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea, funded by the Ministry of Education, Science, and Technology (grant 2013R1A6A3A04058094).
1. C. Dyken et al., “High-Speed Marching Cubes Using HistoPyramids,” Computer Graphics Forum, vol. 27, no. 8, 2008, pp. 2028–2039.
2. K. Hinckley, R. Pausch, and D. Proffitt, “Attention and Visual Feedback: The Bimanual Frame of Reference,” Proc. 1997 Symp. Interactive 3D Graphics (I3D 97), 1997, p. 121.
Sungmin Cho is a researcher at Seoul National University's New Media Lab and a PhD candidate in the university's Department of Mechanical and Aerospace Engineering. His research interests are computer graphics and natural interfaces. Cho received a BS in mechanical and aerospace engineering from Seoul National University. Contact him at sungmins@snu.ac.kr.
Dongyoub Baek is a researcher at Seoul National University's New Media Lab and a PhD candidate in the university's Department of Mechanical and Aerospace Engineering. His research interests are computer graphics, computer vision, and digital geometry processing. Baek received a BS in mechanical and aerospace engineering from Seoul National University. Contact him at b11651@snu.ac.kr.
Seung-Yeob Baek is a postdoctoral researcher in Seoul National University's Institute of Advanced Machinery and Design. His research interests are computational geometry, shape analysis, and human-body modeling. Baek received a PhD in mechanical and aerospace engineering from Seoul National University. During his studies, the Korean Ministry of Education, Science, and Technology awarded him a Global PhD Fellowship. Contact him at bsy86@snu.ac.kr.
Kunwoo Lee is a professor in Seoul National University's School of Mechanical and Aerospace Engineering. His research interests are computer-aided geometric design, assembly modeling, and nonmanifold solid-modeling systems. Lee received a PhD in mechanical engineering from MIT. He's co-editor in chief of Computer-Aided Design, president of the Korean Society of Mechanical Engineers, and dean of Seoul National University's College of Engineering. Contact him at kunwoo@snu.ac.kr.
Hyunwoo Bang is a media artist who has exhibited art at the Siggraph 2008, 2011, and 2013 art galleries; the Ars Electronica Center in Austria; the National Art Center Tokyo; Disseny Hub Barcelona; the Victoria and Albert Museum; and FILE (Electronic Language International Festival) in Brazil. Bang received a PhD in mechanical engineering from Seoul National University, where he previously was an assistant professor. Contact him at savoy@snu.ac.kr.