ICOT was founded as the central research laboratory of the Fifth Generation Computer Systems project in 1982. It is now at the end of its planned 10-year lifetime, having entered an 11th year in 1992. Since the JTEC panel's visit, we have learned unofficially that ICOT's life will be extended for a total of three more years at a reduced staffing level.
ICOT has concentrated on the development of languages and hardware for parallel logic programming. Because of the centrality of parallel processing in its research, we will necessarily spend significant time reporting on it. However, our interest in ICOT is in its effect on knowledge-based systems research and practice in Japan. We will try to maintain a focus on this aspect of the ICOT research.
The Fifth Generation Computer Systems project was motivated by the observation that "Current computers are extremely weak in basic functions for processing speech, text, graphics, picture images, and other non-numeric data, and for artificial intelligence type processing such as inference, association, and learning" (ICOT 1982). To address these shortcomings, the Fifth Generation project was commissioned to build the prototypes for a new (the fifth) generation of hardware and software. Early ICOT documents (1982) identify the following requirements:
A fifth-generation computer system in this early ICOT vision is distinguished by the centrality of problem solving and inference; knowledge-base management; and intelligent interfaces.
Such a system obviously requires enormous computing power. In ICOT's view, it also required a new type of computing power, one more symbolic and inferential in character than conventional systems. Also, the system was explicitly assumed to rely on very large knowledge bases and to provide specialized capabilities for knowledge and database management. Finally, fifth-generation systems were assumed to interact with people in a more human manner, using natural language in both printed and spoken form.
The early ICOT documents call for a three-tier system. At its base is a tier for knowledge-base systems, which includes parallel database management hardware and knowledge base management software. This system was envisioned as "a database machine with 100 to 1000 GB capacity" able "to retrieve the knowledge bases required for answering a question within a few seconds" (ICOT 1982).
The intention of software for the knowledge base management function will be to establish knowledge information processing technology where the targets will be development of knowledge representation systems, knowledge base design and maintenance support systems, large-scale knowledge base systems, knowledge acquisition experimental systems, and distributed knowledge management systems....One particularly important aim will be semi-automated knowledge acquisition, that is, systems will be equipped with a certain level of learning functions. (ICOT 1982)
Built on top of this is a problem-solving and inference tier, which includes hardware for parallel inference, abstract datatype support, and dataflow machines. This tier also includes software for a fifth-generation kernel language (see below), cooperative problem-solving mechanisms, and parallel inference mechanisms.
The final tier is the intelligent man-machine interface system. This was supposed to include dedicated special-purpose hardware for speech and other signal processing tasks, and software for natural language, speech, graphics and image processing:
The intelligent interface function will have to be capable of handling communication with the computer in natural language, speech, graphics, and picture images so that information can be exchanged in ways natural to man. Ultimately the system will cover a basic vocabulary (excluding specialist terms) of up to 100,000 words and up to 2,000 grammatical rules, with a 99 percent accuracy in syntactic analysis. The object speech inputs will be continuous speech in Japanese standard pronunciation by multiple speakers, and the aims here will be a vocabulary of 50,000 words, a 95 percent recognition rate for individual words, and recognition of processing within three times the real time of speech. The system should be capable of storing roughly 10,000 pieces of graphic and image information and utilizing them for knowledge information processing. (ICOT 1982)
These three tiers were then supposed to support a very sophisticated program development environment, to raise the level of software productivity and to support experimentation in new programming models. The basic three tiers were also supposed to support some basic application systems. Those listed in the 1982 document include machine translation, consultation systems, and intelligent programming systems (including automated program synthesis and verification).
It was decided early on that the ICOT systems would be logic programming systems that would build on, but significantly extend, PROLOG. Also, it was decided that the logic programming language would be a kernel language that would be used for a broad spectrum of software, ranging from the implementation of the system itself up through the application layers.
In practice, ICOT's central focus became the development of a logic programming kernel language and hardware tailored to the efficient execution of this language. The system's performance target was to be from 100 MegaLIPS (logical inferences per second -- simple PROLOG procedure calls) to 1 GigaLIPS. As a reference point, ICOT documents estimate that one logical inference takes about 100 instructions on a conventional machine; a 1 MLIPS machine would therefore be roughly equivalent to a 100 MIPS processor, although this comparison may confuse more than it reveals. The rather reasonable assumption was made that achieving such high performance would require parallel processing: "the essential research and development will concentrate...on high-level parallel architectures to support the symbol processing that is the key to inference" (ICOT 1982). Furthermore, the assumption was made that achieving the desired performance target would require about 1,000 processing elements per system, given reasonable assumptions on the performance of a single such processing element.
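The performance arithmetic above can be checked directly. The short sketch below (Python) uses only the figures stated in the text; the function name is ours, invented for illustration:

```python
# Back-of-the-envelope check of the ICOT performance targets.
# Figure from the text: 1 logical inference ~ 100 conventional instructions.
INSTRUCTIONS_PER_LI = 100

def lips_to_mips(lips):
    """Rough conventional-instruction-rate equivalent of a LIPS rating."""
    return lips * INSTRUCTIONS_PER_LI / 1e6

# A 1 MLIPS machine is roughly a 100 MIPS processor.
print(lips_to_mips(1e6))  # 100.0

# The 1 GLIPS target spread over 1,000 processing elements requires
# about 1 MLIPS from each element.
target_lips = 1e9
processing_elements = 1000
print(target_lips / processing_elements)  # 1000000.0 LIPS per element
```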
ICOT's development plans were segmented into three phases. The goal for the initial phase was to develop a personal sequential inference machine (PSI), that is, a workstation tailored to efficient execution of a sequential logic programming language. This phase was also supposed to develop the system software for a high-capability programming environment for fifth-generation software. Initial consideration of parallel systems was also to begin during this phase.
In the second phase, a refined personal sequential inference machine would be developed, the model for parallel programming would be settled upon, and initial exploratory parallel architectures would be prototyped.
The third phase would build the Parallel Inference Machines (PIM). This would include not only the hardware effort, but a parallel operating system and a second-generation kernel language appropriate for parallel processing (Figure 5.1).
During the first three-year phase of the project, the personal sequential inference machine (PSI-1) was built and a reasonably rich programming environment was developed for it.
To put this effort in context, we compare it to the U.S. project which it most resembles: the MIT LISP Machine. The MIT project had begun in the late 1970s and had just reached commercialization at the time of ICOT's inception. Like the MIT machine, PSI was a microprogrammed processor designed to support a symbolic processing language. The symbolic processing language played the role of a broad spectrum kernel language for the machine, spanning the range from low-level operating system details up to application software. The hardware and its microcode were designed to execute the kernel language with high efficiency. The machine was a reasonably high performance workstation with good graphics, networking and a sophisticated programming environment.
What made PSI different was the choice of language family. Unlike more conventional machines oriented toward numeric processing, or the MIT machine, which was oriented towards LISP, the language chosen for PSI was PROLOG. The primary appeal of PROLOG-like languages to ICOT was the analogy between the basic operations of PROLOG and simple rule-like logical inferencing. A procedure in such a language can be viewed as simply reducing a goal to its subgoals. Given
Figure 5.1. ICOT Accomplishments in Sequential and Parallel Systems
the emphasis on inference as a key component of the FGCS vision, the choice seemed quite natural. However, the choice of a logic programming framework for the kernel language was a radical one since there had been essentially no experience anywhere with using logic programming as a framework for the implementation of core system functions.
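The goal-to-subgoal reading of a logic programming procedure can be illustrated with a minimal backward chainer. The sketch below is illustrative Python, not ICOT code, and the clause database is invented for the example:

```python
# Minimal propositional backward chainer illustrating how a clause
# reduces a goal to its subgoals (the PROLOG reading described above).
# The rule base is a made-up example, not from the ICOT systems.
RULES = {
    "grandparent(a,c)": [["parent(a,b)", "parent(b,c)"]],
    "parent(a,b)": [[]],          # a fact: no subgoals
    "parent(b,c)": [[]],
}

def solve(goal):
    """A goal succeeds if some clause for it has all subgoals succeeding."""
    for subgoals in RULES.get(goal, []):
        if all(solve(g) for g in subgoals):
            return True
    return False

print(solve("grandparent(a,c)"))  # True
print(solve("parent(c,a)"))       # False
```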
PSI-1 achieved a performance of about 35K LIPS, comparable to DEC-10 PROLOG, the PROLOG performance of the Symbolics 3600 (one of the follow-ons to the MIT LISP Machine) or Quintus's PROLOG implementation for Sun-3 class machines. This was fast enough to allow the development of a rich operating system and programming environment, but still quite slow compared to the Phase 3 goals (1000 processors achieving 1 GLIPS implies at least 1 MLIPS per processor). Two extended PROLOG-like languages (ESP and KL0) were developed for PSI-1. ESP (Extended Self-Contained PROLOG) included a variety of features such as coroutining constructs, non-local cuts, etc., necessary to support system programming tasks as well as more advanced logic programming. SIMPOS, the operating system for the PSI machines, was written in ESP.
Several hundred PSI machines were built and installed at ICOT and related facilities, and the machine was also sold commercially. However, even compared to specialized LISP hardware in the U.S., the PSI machines were impractically expensive. Dr. Chikayama at ICOT told us during our visit that the PSI (and subsequent) machines had many features whose purpose was to support experimentation and whose cost/benefit tradeoff had not been evaluated as part of the design. In his view, the machines were inherently non-commercial.
Phase 1. During Phase 1, it was decided to explore an "And-Parallel" approach to parallel logic programming. To simplify, this means that the subgoals of a clause are explored in parallel, with shared variable bindings being the means of communication. The process solving one subgoal can communicate with a process solving a sibling subgoal by binding a shared variable to a concrete value. It was also observed that subgoals would have to spread out across the network of processors constituting the parallel machine and that careful control would be required to avoid the buildup of communication bottlenecks. By the end of Phase 1, the form of the parallel kernel language was clarified: it was to be a flat guarded horn clause (FGHC) language. A previous JTEC study, JTEC Panel Report on Advanced Computing in Japan (Denicoff 1987), has already reported on this, so we will be very brief in explaining the concept. A flat guarded horn clause consists of three parts: 1) head, 2) guard, and 3) body. The head plays exactly the same role as the head of a PROLOG clause: it identifies the set of goals for which the clause is suitable (i.e., those goals which unify with the head). The guard and body collectively play the role of the body of a PROLOG clause, i.e., they are a set of subgoals whose truth implies the truth of the head. However, the body of the clause is not executed until all variables in the guard are bound and all literals in the guard are satisfied. In the case where two or more clauses have heads that unify with the same goal, only the body of the clause whose guard is first satisfied will execute (hence the name guarded horn clause). "Flat" means that the guard can only contain built-in predicates, rather than predicates which are evaluated by further chaining. This greatly simplifies the mechanisms without significantly reducing their expressive power.
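The clause-selection behavior described above can be mimicked in a few lines. The sketch below is a toy Python model with an invented clause representation, showing commit-on-first-satisfied-guard and suspension while a guard variable is unbound:

```python
# Toy model of FGHC clause selection: each clause is (guard, body).
# A guard over an unbound variable suspends; the goal commits to the
# first clause whose guard is satisfied. The representation is invented
# for illustration, not the actual KL1 encoding.
UNBOUND = object()

def step(x, clauses):
    """Try each clause's guard against x; commit to the first that holds."""
    if x is UNBOUND:
        return "suspend"            # wait until a sibling process binds x
    for guard, body in clauses:
        if guard(x):
            return body(x)          # commit: the other clauses are discarded
    return "fail"

# Two clauses guarded by built-in comparisons, in the style of max/2.
clauses = [
    (lambda x: x >= 0, lambda x: f"nonneg({x})"),
    (lambda x: x < 0,  lambda x: f"neg({x})"),
]
print(step(UNBOUND, clauses))  # suspend
print(step(3, clauses))        # nonneg(3)
print(step(-2, clauses))       # neg(-2)
```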
The execution of an FGHC program is summarized in four rules:
Figure 5.2 shows a set of FGHC clauses for a prime sieve algorithm and how the clauses begin to elaborate a parallel process structure. One should notice that this interpretation model does not lead to an automatic search mechanism as in PROLOG. In PROLOG, all relevant clauses are explored, and the order of exploration is specified by the programming model. In FGHC, only a single relevant clause is explored; ICOT has had to conduct research on how to recapture search capabilities within the FGHC framework.
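The process structure of the prime sieve can be approximated with Python generators, each generator standing in for one filter process and the values flowing between them for the bound stream variables. This is a sequential analogue only; the FGHC version runs the filters in parallel:

```python
# Sequential analogue of the FGHC prime sieve: a generator chain in
# which each filter removes multiples of one prime, mirroring the
# process structure that the guarded clauses of Figure 5.2 unfold.
def integers_from(n):
    while True:
        yield n
        n += 1

def filter_multiples(p, stream):
    for x in stream:
        if x % p != 0:
            yield x

def primes(limit):
    out, stream = [], integers_from(2)
    while len(out) < limit:
        p = next(stream)                      # stream head is the next prime
        out.append(p)
        stream = filter_multiples(p, stream)  # spawn a new filter "process"
    return out

print(primes(8))  # [2, 3, 5, 7, 11, 13, 17, 19]
```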
Phase 2. The second three-year phase saw the development of the PSI-2 machine, which provided a significant speedup over PSI-1. Towards the end of Phase 2, a parallel machine (the Multi-PSI) was constructed to allow experimentation with the FGHC paradigm. This consisted of an 8 x 8 mesh of PSI-2 processors, running the ICOT Flat Guarded Horn Clause language KL1 (not to be confused with the knowledge representation language KL-ONE developed at Bolt Beranek and Newman). Multi-PSI supported the development of the ICOT parallel operating system PIMOS and some initial small-scale parallel application development. PIMOS is a parallel operating system written in KL1; it provides parallel garbage collection algorithms, algorithms to control task distribution and communication, a parallel file system, etc.
Phase 3. Phase 3 has centered on the refinement of the KL1 model and the development of massively parallel hardware systems to execute it. KL1 has been refined into a three-level language. KL1-b is the machine-level language underlying the other layers. KL1-c is the core language used to write most software; it extends the basic FGHC paradigm with a variety of useful features, such as a macro language.
Figure 5.2. Prime Sieve Algorithm Using Guarded Horn Clauses
KL1-p includes the "pragmas" for controlling the implementation of the parallelism. There are three main pragmas. The first of these is a meta-level execution control construct named "shoen" which allows the programmer to treat a group of processes (i.e., a goal and its subgoals) as a unit of execution control. A shoen is created by calling a special routine with the code and its arguments; this creates a new shoen executing the code and all generated subgoals. These subgoals are, however, running in parallel. A failure encountered by any sub-process of a shoen is isolated to that shoen. Each shoen has a message stream and a report stream by which it communicates with the operating system; shoens may be nested but the OS treats the shoen as a single element. Suspending a shoen results in the suspension of all its children, etc. Fine-grain process management is handled by the shoen, freeing the OS from this responsibility.
The second pragma allows the programmer to specify the priority of a goal (and the process it spawns). Each shoen has a minimum and maximum priority for the goals belonging to it. The priority of a goal is specified relative to these.
The third pragma allows the programmer to specify the processor placement for a body goal. This may be a specific processor or a logical grouping of processors. All three of these pragmas are meta-execution control mechanisms which themselves execute at runtime; KL1-p thus allows dynamic determination of the appropriate priority, grouping and placement of processes.
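The three pragmas can be sketched together in a toy scheduler. The Python below is an invented illustration (not the PIMOS scheduler): a "shoen" groups goals as a unit of control, each goal's priority is specified relative to the shoen's range, and a placement tag names the processor it should run on:

```python
import heapq

# Toy model of the KL1-p pragmas. All names are invented for
# illustration; this is not the actual PIMOS scheduler.
class Shoen:
    def __init__(self, min_pri, max_pri):
        self.min_pri, self.max_pri = min_pri, max_pri
        self.queue = []
        self.suspended = False      # suspending a shoen halts all its goals

    def spawn(self, rel_pri, processor, goal):
        # Goal priority is specified relative to the shoen's own range.
        pri = max(self.min_pri, min(self.max_pri, self.min_pri + rel_pri))
        heapq.heappush(self.queue, (-pri, processor, goal))

    def run(self):
        trace = []
        while self.queue and not self.suspended:
            pri, proc, goal = heapq.heappop(self.queue)
            trace.append((goal, -pri, proc))
        return trace

s = Shoen(min_pri=0, max_pri=10)
s.spawn(5, "pe0", "reduce(a)")
s.spawn(9, "pe1", "reduce(b)")
s.spawn(1, "pe0", "reduce(c)")
print(s.run())  # highest-priority goal first: reduce(b), reduce(a), reduce(c)
```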
During our visit, Dr. Chikayama mentioned that some of the current software is written in higher level languages embedded in KL1, particularly languages which establish an object orientation. Two such languages have been designed: A'UM and AYA. Objects are modeled as processes communicating with one another through message streams. The local state of an object is carried along in the cyclical call chain from dispatching routine, to service subroutine, back to dispatching routine. Synchronization among processes is achieved through the binding of variables in the list structure modeling the message stream.

Parallel Machines. There are five distinct parallel inference machines (PIMs) being developed to execute KL1, each being built by a commercial hardware vendor associated with ICOT. The PIMs vary in processor design and communication network. The abstract model of all PIMs consists of a loosely coupled network connecting clusters of tightly coupled processors. Each cluster is, in effect, a shared memory multiprocessor; the processors in the cluster share a memory bus and implement a cache coherency protocol. Three of the PIMs are massively parallel machines: PIM/p, PIM/m and PIM/c. PIM/k and PIM/i are research machines designed to study specific intracluster issues such as caching and bus communication. Multi-PSI is a medium scale machine built by connecting 64 PSIs in a mesh architecture. PIM/m and Multi-PSI do not use a cluster architecture (but may be considered as degenerate cases having one processing element per cluster).
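The object-as-process model of A'UM and AYA described above can be approximated with a Python generator-style loop: the object's local state is threaded through the dispatch loop rather than stored in mutable fields, and the message stream is the only channel into the object. This is an illustrative analogue, not A'UM or AYA syntax:

```python
# Sketch of the object-as-process idea: an object is a process reading
# a message stream; its local state is carried around the dispatch loop.
# Message names and the reply convention are invented for illustration.
def counter_process(messages):
    state = 0                      # local state threaded through the loop
    replies = []
    for msg in messages:           # the message stream drives dispatch
        if msg == "inc":
            state += 1             # "service subroutine" yields new state
        elif msg == "get":
            replies.append(state)  # reply by binding a stream variable
    return replies

print(counter_process(["inc", "inc", "get", "inc", "get"]))  # [2, 3]
```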
The main features of the PIM communication systems are shown in Table 5.1. Relevant features about the processing elements' implementation technology are shown in Table 5.2.
Main Features of Communication Systems
It should be noted that cycle times for processing elements are relatively modest. Commercial RISC chips have had cycle times lower than these for several years (the lower the cycle time, the faster the instruction rate). Newly emerging processor chips (such as the DEC ALPHA) have cycle times as low as five nsec. Even granting that special architecture features of the PIM processor chips may lead to a significant speedup (a factor of three, at most), these chips are disappointing compared to the commercial state of the art. The networks used to interconnect the systems have respectable throughput, comparable to that of commercially available systems such as the CM-5. In certain of the PIMs each processor (or processor cluster) can have a set of disk drives; this may allow more balance between processing power and I/O bandwidth for database applications, but there is as yet no data to either confirm or refute this.
Processing Elements' Implementation Technology
ICOT's software efforts have been layered (see Figure 5.3). So far, we have discussed the bottom-most layer, that concerned with the operating system and language runtime system for parallel logic programming. On this foundation, ICOT has pursued research into databases and knowledge base support, constraint logic programming, parallel theorem proving, and natural language understanding.
In the area of databases, ICOT has developed a parallel database system called Kappa-P. This is a nested relational database system based on an earlier ICOT system called Kappa. Kappa-P is a parallel version of Kappa, re-implemented in KL1. It also adopts a distributed database framework to take advantage of the ability of the PIM machines to attach disk drives to many of the processing elements. Quixote is a knowledge representation language built on top of Kappa-P. It is a constraint logic programming language with object-orientation features such as object-identity, complex objects described by the decomposition into attributes and values, encapsulation, type hierarchy and methods. ICOT also describes Quixote as a deductive object oriented database (DOOD). Quixote and Kappa-P have been used to build a molecular biological database and a legal reasoning system ("TRIAL").
Figure 5.3. ICOT's Layered Approach to Software Development
ICOT has been one of the world-class centers for research into constraint logic programming. All such languages share the idea of merging constraint solvers for specific non-logical theories (such as linear equations or linear inequalities) into a logic programming context. Two languages of this type have been developed at ICOT. CAL (Constraint Avec Logique) is a sequential constraint logic programming language which includes algebraic, Boolean, set, and linear constraint solvers. GDCC (Guarded Definite Clauses with Constraints) is a parallel constraint logic programming language with algebraic, Boolean, linear, and integer parallel constraint solvers.
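The constraint-logic-programming idea behind CAL and GDCC can be sketched in miniature: goal reduction posts constraints to a store, and a solver for a specific theory decides consistency as the derivation proceeds. The Python below uses a deliberately tiny invented theory (interval bounds on a single variable), not the actual CAL or GDCC solvers:

```python
# Sketch of constraint logic programming: goals post constraints to a
# shared store, and a theory-specific solver checks consistency.
# The theory here (bounds on one variable) is invented for illustration.
def consistent(constraints):
    """Solver for conjunctions of x >= a and x <= b bounds."""
    lo, hi = float("-inf"), float("inf")
    for op, val in constraints:
        if op == ">=":
            lo = max(lo, val)
        elif op == "<=":
            hi = min(hi, val)
    return lo <= hi

store = []
store.append((">=", 3))   # posted by one goal during reduction
store.append(("<=", 10))  # posted by another goal
print(consistent(store))                # True
print(consistent(store + [("<=", 2)]))  # False: 3 <= x <= 2 is empty
```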
Another area explored is automatic theorem proving. ICOT has developed a parallel theorem prover called MGTP (Model Generation Theorem Prover). This is written in KL1 and runs on the PIMs. MGTP has obtained a more than 100-fold speedup on a 128 processing element PIM/m for a class of problems known as condensed detachment problems. MGTP is based on the model generation proving methods first developed in the SATCHMO system. However, the ICOT version uses the unification hardware of the PIMs to speed this up for certain common cases. MGTP has been used as a utility in a demonstration legal reasoning system. It has also been used to explore non-monotonic and abductive reasoning. Finally, MGTP has been employed in some program synthesis explorations, including the synthesis of parallel programs.
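The model-generation method behind MGTP can be conveyed with a much-simplified propositional sketch: starting from the empty model, forward-chain on rules whose antecedents are already satisfied, case-splitting on disjunctive consequents and closing a branch when a rule with an empty consequent fires. The real MGTP (like SATCHMO) works on first-order clauses with unification; the rules below are invented:

```python
# Simplified, propositional sketch of SATCHMO/MGTP-style model
# generation. Rules are (antecedent_set, consequent_disjunction);
# an empty consequent list means "false" (the branch closes).
def extend(model, rules):
    """Return all saturated models extending `model`, or [] if closed."""
    for ante, consequents in rules:
        if ante <= model and not any(c in model for c in consequents):
            if not consequents:          # antecedent holds, consequent false
                return []                # model refuted
            results = []
            for c in consequents:        # case-split on the disjunction
                results += extend(model | {c}, rules)
            return results
    return [model]                       # no rule applies: model found

rules = [
    (set(), ["p", "q"]),   # true -> p v q
    ({"p"}, ["r"]),        # p -> r
    ({"q"}, []),           # q -> false
]
print([sorted(m) for m in extend(set(), rules)])  # [['p', 'r']]
```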
Natural language processing has been a final area of higher level support software developed at ICOT. There have been several areas of work: 1) a language knowledge base consisting of a Japanese syntax and dictionary; 2) a language tool box containing morphological and syntax analyzers, sentence generator, concordance system, etc.; and 3) a discourse system which rides on top of the first two. These are combined in a parallel, cooperative language understanding system using type inference. The dictionary has about 150,000 entries of which 40,000 are proper names (to facilitate analysis of newspaper articles).
On top of these tools, a variety of demonstration application systems have been developed. Those shown running on the PIM machines at the 10th anniversary FGCS conference are listed here:
When asked what they regarded as their legacy -- the core achievement of the 10-year ICOT program -- both Dr. Fuchi and Dr. Chikayama said that it was KL1, as opposed to the PIM hardware, the higher level software or the application demos.
Three key developments in KL1 are worth noting.
First, the language itself is an interesting parallel programming language. KL1 bridges the abstraction gap between parallel hardware and knowledge-based application programs. It is a language designed to support symbolic (as opposed to strictly numeric) parallel processing. It is an extended logic programming language which includes features needed for realistic programming (such as arrays). However, it should also be pointed out that like many other logic programming languages, KL1 will seem awkward to some and impoverished to others.
Second is the development of a body of optimization technology for such languages. Dr. Chikayama noted that efficient implementation of a language such as KL1 required a whole new body of compiler optimization technology. ICOT has developed such a body of techniques. Because there are several architecturally distinct PIMs (and the Multi-PSI), ICOT has been forced to develop a flexible implementation strategy for KL1. KL1 is compiled into an intermediate language, KL1-B. KL1-B is the abstract language implemented by each hardware system; it is a hardware model of a coupled multiprocessor in which some processors are linked in tightly coupled clusters. To build a KL1 implementation, the architect must transform the abstract KL1-B specification into a physical realization. This is done semi-automatically.
The third achievement is noticing where hardware can play a significant role in supporting the language implementation. Part of the architecture of the PIM (and PSI) processors is a tag-checking component which provides support for the dynamic type checking and garbage collection needed to support a symbolic computing language. This small amount of additional hardware provides a high degree of leverage for the language implementation without necessarily slowing down the processor or introducing undue complexity to the implementation. Such features, which might also support LISP and other more dynamic languages, may eventually find their way into commercial processors.
At the time of the JTEC team's visit to ICOT in March 1992, the first of the PIMs (PIM/m) was running PIMOS and KL1 reliably. This machine was still awaiting the arrival of modules providing additional processors. The second and third PIMs were being installed and had begun to execute KL1 code and parts of the OS. Multi-PSI was also available for experimentation. At the time of the 10th anniversary Fifth Generation Conference in June 1992, the remainder of the PIMs were running and were demonstrated.
The early ICOT documents suggested a push towards very advanced and very large-scale knowledge-based systems. Actually, the core ICOT efforts went off in a different direction. The central perspective of ICOT has been to develop parallel symbolic programming (in particular, parallel logic programming) by developing a new language and by developing experimental hardware to support the language.
The demo applications for the PIM machines seem comparatively routine. Even though each of these programs demonstrates the power of parallelism and each embeds some advance in parallel programming, when viewed as knowledge-based systems, these systems bring little new to bear. For example, the multi-sequence matching program has a new approach to simulated annealing which uniquely capitalizes on the available parallelism; however, it knows essentially nothing about genetics and proteins. This doesn't make the program useless or uninteresting; it is solving a task for which crunch seems to dominate over knowledge. However, from our perspective of studying advances in knowledge-based systems technology, the program is disappointing. The VLSI routing program is subject to the same critique. Of the programs demonstrated, the legal reasoning system is the only one which might be fairly termed a knowledge-based system. Here, parallelism was used to accelerate both case retrieval and logical argumentation. Nevertheless, for all the computational power being brought to bear, the system did not seem to establish a new plateau of capability.
The early ICOT documents discuss the management of very large knowledge bases, large-scale natural language understanding, and image understanding. Also, the early documents place a strong emphasis on knowledge acquisition and learning. Each of these directions seems to have been either dropped, relegated to secondary status, absorbed into the work on parallelism, or transferred to other research initiatives (such as EDR).

In summary, in answer to the question, "has ICOT directly accelerated the development of knowledge-based technology in Japan so far?," the answer would have to be "no."
However, there are other questions which are also relevant:
The answer to question 1 is almost certainly "yes." Our hosts at virtually every site we visited said that ICOT's work had little direct relevance to them. The reasons most frequently cited were: the high cost of the ICOT hardware, the choice of PROLOG as a language, and the concentration on parallelism. However, nearly as often our hosts cited the indirect effect of ICOT: the establishment of a national project with a focus on fifth-generation technology had attracted a great deal of attention for artificial intelligence and knowledge-based technology. Our hosts at several sites commented on the fact that this had attracted better people into the field and lent an aura of respectability to what previously had been regarded as esoteric. One professor told us that AI now gets the best students, and that this had not been true before the inception of ICOT and the fifth-generation project.
Question 2 is considerably more difficult to answer. ICOT's work has built an elegant framework for parallel symbolic computing. Most AI experts agree that without parallelism there will ultimately be a barrier to further progress due to the lack of computing power. However, this barrier does not seem imminent. Workstations with more than 100 MIPS of uniprocessor performance are scheduled for commercial introduction this year. With the exception of those sub-disciplines with a heavy signal processing component (e.g., vision, speech, robotics) we are more hampered by lack of large-scale knowledge bases than we are by lack of parallelism. It is, however, quite possible that in the near future this will be reversed and we will be in need of parallel processing technology to support very large-scale knowledge-based systems. We will then be in dire need of programming methodology and techniques to capitalize on parallel hardware. Should this occur, ICOT's work might provide Japanese researchers with a significant advantage.
This, however, depends on the answer to question 3: "Has the ICOT research significantly impacted parallel computing technology?" There are arguments to be made on both sides of this question. On the positive side, we can argue that KL1 is an interesting symbolic computing language. Furthermore, it is a parallel symbolic computing language, and very little work has been done elsewhere on languages for expressing parallel symbolic computation. Another positive point is that ICOT will have the test bed of the several PIM machines with which to continue experimentation. This is an unusual opportunity; no other site has access to several distinct implementations of the same virtual parallel machine. It is not unreasonable to expect significant insights to emerge from this experimentation. Finally, we can add that ICOT has confronted a set of interesting technical questions about load distribution, communication and garbage collection in a parallel environment.
On the negative side we may cite several arguments as well. ICOT built for itself a relatively closed world. In both the sequential and parallel phases of its research, there has been a new language developed which is only available on the ICOT hardware. Furthermore, the ICOT hardware has been experimental and not cost-effective for practical applications. This has prevented the ICOT technology from having any impact on, or enrichment from, the practical considerations of the industrial and business worlds.
Earlier we pointed to similarities between the ICOT PSI systems and the MIT LISP machine and its commercial successors. It is noteworthy that only a few hundred PSI machines were sold commercially, while there were several thousand LISP machines sold, some of which continue to be used in important commercial applications such as American Express's Authorizer's Assistant. The one commercial use we saw of the PSI machines was at Japan Air Lines, where PSI-2 machines were employed. (Ironically, they were re-microcoded to LISP machines.) Furthermore, the MIT LISP machine acted as a catalyst, providing a powerful LISP engine until better implementation techniques for LISP were developed for stock hardware. As knowledge-based technology has become more routinized in both the U.S. and Japan, commercial KBS tools have been recoded in C. In the U.S., the AI research community continues to use LISP as a vehicle for the rapid development of research insights; there seems to be little such use of the ICOT-based technology in Japan.
The PIM hardware seems destined for the same fate. The processing elements in the PIMs have cycle times no better than 60 ns; even assuming that the features which provide direct support for KL1 offer a speedup factor of three, this leaves the uniprocessor performance lagging behind the best of today's conventional microprocessors. Both HP and DEC have announced the imminent introduction of uniprocessors of between 100 and 200 MIPS. The interconnection networks in the PIMs do not seem to constitute an advance over those explored in other parallel systems. Finally, the PIMs are essentially integer machines; they do not have floating-point hardware. While the interconnection networks of the PIMs have reasonable performance, this performance is comparable to that of commercial parallel machines in the U.S., such as the CM-5.
It is interesting to compare the PIMs to Thinking Machines' CM-5. This is a massively parallel machine, a descendant of the MIT Connection Machine project. The CM-5 is the third commercial machine in this line of development. It can support a thousand SPARC chips (and presumably other faster microprocessors as they arise) using an innovative interconnection scheme called Fat Trees. Although the Connection Machine project and ICOT started at about the same time, the CM-5 is commercially available and has found a market within which it is cost-effective. One reason for this is that it has quite good floating point performance. It appears that the only established market for massive parallelism is in scientific computing, leaving the PIMs with a disadvantage that will be difficult to overcome.
Our hosts at ICOT were not unaware of these problems. Dr. Chikayama mentioned a project to build a KL1 system for commercially available processors. This would decouple the language from the experimental hardware and make it more generally available. This greater availability could in turn allow a greater number of researchers whose interests are in large knowledge-based systems to begin to explore the use of the KL1 paradigm. Given ICOT's implementation strategy, this should not be an overwhelming task. One of the PIM hardware designers has also designed another parallel system (the AP-1000, a mesh-connected system of about 1000 SPARC chips). This might be a likely target for such an effort.
In contrast to the Connection Machine efforts (and virtually all other parallel system efforts), which have increasingly focused on massively parallel scientific computation, the ICOT effort has continued to focus on symbolic computing. In contrast to the MIT LISP Machine efforts, which didn't achieve enough commercial viability to afford a push forward into parallelism, ICOT has had sustained long term government funding which has allowed it to persevere. Thus, it has remained the only research institution in Japan with a focus on massively parallel symbolic computing.